cache-strategy

Design and implement caching layers for APIs and web applications using Redis or Memcached. Use when you need to reduce database load, improve response times, or handle traffic spikes. Covers cache-aside, write-through, and write-behind patterns, TTL strategies, cache invalidation, and stampede prevention. Trigger words: cache, Redis, Memcached, TTL, cache invalidation, response time, throughput, rate limiting.

26 stars

Best use case

cache-strategy is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using cache-strategy should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/cache-strategy/SKILL.md --create-dirs "https://raw.githubusercontent.com/TerminalSkills/skills/main/skills/cache-strategy/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/cache-strategy/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How cache-strategy Compares

Feature / Agent         | cache-strategy | Standard Approach
Platform Support        | Not specified  | Limited / Varies
Context Awareness       | High           | Baseline
Installation Complexity | Unknown        | N/A

Frequently Asked Questions

What does this skill do?

It guides an agent through designing and implementing caching layers for APIs and web applications using Redis or Memcached: choosing between cache-aside, write-through, and write-behind patterns, setting TTLs, invalidating caches, and preventing cache stampedes.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Cache Strategy

## Overview
This skill helps you design and implement multi-layer caching strategies for high-traffic APIs. It covers choosing the right caching pattern for your data access profile, configuring TTLs, preventing cache stampedes, and setting up cache invalidation that actually works in production.

## Instructions

### 1. Analyze the caching opportunity
Before adding caching, identify what to cache by examining query patterns:

```typescript
// Instrument your API routes to log response times and call frequency
// Look for: high frequency + low change rate = best cache candidates
// Example analysis output:
// GET /api/products      → 12,000 req/min, changes every 30min → CACHE (TTL: 5min)
// GET /api/products/:id  → 8,000 req/min, changes on update   → CACHE (invalidate on write)
// POST /api/orders       → 200 req/min, always unique          → DO NOT CACHE
// GET /api/user/profile  → 3,000 req/min, changes rarely       → CACHE (TTL: 15min)
```
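One lightweight way to produce the numbers in the analysis above is to record per-route call counts and latency, then rank routes by frequency. A minimal framework-agnostic sketch (the helper names `record` and `topRoutes` are ours, not from the skill):

```typescript
// Record per-route call counts and cumulative latency in memory.
const stats = new Map<string, { calls: number; totalMs: number }>();

function record(route: string, elapsedMs: number): void {
  const entry = stats.get(route) ?? { calls: 0, totalMs: 0 };
  entry.calls += 1;
  entry.totalMs += elapsedMs;
  stats.set(route, entry);
}

// Rank routes by call frequency: high frequency + low change rate
// is the best cache candidate.
function topRoutes(n: number): Array<{ route: string; calls: number; avgMs: number }> {
  return [...stats.entries()]
    .map(([route, s]) => ({ route, calls: s.calls, avgMs: s.totalMs / s.calls }))
    .sort((a, b) => b.calls - a.calls)
    .slice(0, n);
}
```

Call `record` from a response-finish hook in your framework of choice, then dump `topRoutes(20)` after a representative traffic window.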

### 2. Implement cache-aside pattern (most common)
The application checks cache first, falls back to database, then populates cache:

```typescript
import Redis from "ioredis";

const redis = new Redis({ host: "localhost", port: 6379, maxRetriesPerRequest: 3 });

async function getCached<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds: number = 300
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const data = await fetcher();
  await redis.set(key, JSON.stringify(data), "EX", ttlSeconds);
  return data;
}

// Usage in route handler
app.get("/api/products/:id", async (req, res) => {
  const product = await getCached(
    `product:${req.params.id}`,
    () => db.products.findById(req.params.id),
    600 // 10 minutes
  );
  res.json(product);
});
```

### 3. Prevent cache stampedes
When a popular key expires, hundreds of requests hit the database simultaneously:

```typescript
async function getCachedWithLock<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttlSeconds: number = 300
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const lockKey = `lock:${key}`;
  const acquired = await redis.set(lockKey, "1", "EX", 10, "NX");

  if (acquired) {
    try {
      const data = await fetcher();
      await redis.set(key, JSON.stringify(data), "EX", ttlSeconds);
      return data;
    } finally {
      await redis.del(lockKey);
    }
  }

  // Another process is refreshing — wait and retry
  await new Promise((r) => setTimeout(r, 100));
  return getCachedWithLock(key, fetcher, ttlSeconds);
}
```
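The lock above serializes refreshes of a single hot key; a complementary, lighter-weight tactic is to add random jitter to TTLs so that keys written in the same burst do not all expire in the same instant. A minimal sketch (the helper name is ours, not from the skill):

```typescript
// Spread expirations: base 300s with the default 10% spread yields a TTL
// anywhere in [300, 330), so co-written keys expire at different times.
function jitteredTtl(baseSeconds: number, spreadFraction = 0.1): number {
  const jitter = Math.random() * baseSeconds * spreadFraction;
  return Math.floor(baseSeconds + jitter);
}
```

Pass `jitteredTtl(300)` wherever a fixed `ttlSeconds` would otherwise be used, e.g. as the third argument to `getCachedWithLock`.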

### 4. Cache invalidation strategies
Pattern-based invalidation for related data:

```typescript
async function invalidatePattern(pattern: string): Promise<void> {
  let cursor = "0";
  do {
    const [nextCursor, keys] = await redis.scan(cursor, "MATCH", pattern, "COUNT", 100);
    if (keys.length > 0) await redis.del(...keys);
    cursor = nextCursor;
  } while (cursor !== "0");
}

// On product update: invalidate product and any list containing it
async function onProductUpdate(productId: string): Promise<void> {
  await redis.del(`product:${productId}`);
  await invalidatePattern("products:list:*");
}
```
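The overview also names write-through, where the write path updates the cache and the source of truth together so reads never observe a stale cached value after a write. The skill itself shows no code for it; here is a minimal sketch using in-memory `Map`s in place of Redis and the database (swap in `redis.set`/`redis.get` and your DB client for real use):

```typescript
// Stand-ins for the real cache and database.
const cache = new Map<string, string>();
const store = new Map<string, string>();

async function writeThrough(key: string, value: unknown): Promise<void> {
  const serialized = JSON.stringify(value);
  store.set(key, serialized);  // write to the source of truth first
  cache.set(key, serialized);  // then refresh the cache in the same operation
}

async function readThrough(key: string): Promise<unknown> {
  const cached = cache.get(key);
  if (cached !== undefined) return JSON.parse(cached);
  const stored = store.get(key);
  return stored !== undefined ? JSON.parse(stored) : null;
}
```

The trade-off versus cache-aside: every write pays the cache-update cost, even for data that is never read back, in exchange for guaranteed freshness on the next read.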

## Examples

### Example 1: API response caching
**Prompt:** "Add Redis caching to my Express API. The /api/catalog endpoint is called 15,000 times per minute but the data only changes when an admin publishes updates. I want sub-10ms response times."

**Agent output:**
- Adds `src/cache/redis-client.ts` with connection pooling and retry config
- Wraps `/api/catalog` with cache-aside pattern, 5-minute TTL
- Adds stampede prevention with distributed locking
- Creates `src/cache/invalidation.ts` — called from the admin publish endpoint to bust catalog cache
- Adds cache hit/miss metrics via response headers (`X-Cache: HIT` / `X-Cache: MISS`)

### Example 2: Multi-layer caching
**Prompt:** "Our product API serves 50,000 RPM. Add in-memory cache for the hottest 1,000 items and Redis for everything else. Products change when inventory updates."

**Agent output:**
- Adds `src/cache/memory-lru.ts` using an LRU cache with 1,000 max entries and 60-second TTL
- Adds `src/cache/tiered-cache.ts` that checks memory → Redis → database in sequence
- Creates `src/events/inventory-handler.ts` that invalidates both cache layers on inventory change
- Adds `/admin/cache/stats` endpoint showing hit rates for each layer

## Guidelines

- **Cache-aside is the default** — use write-through only when you need guaranteed cache freshness on writes.
- **Never cache without a TTL** — even "permanent" data should have a long TTL (1 hour+) as a safety net.
- **Use key namespacing** — prefix keys like `products:v2:{id}` so you can version your cache schema.
- **Monitor hit rate** — below 80% means your TTL is too short or your data changes too fast for caching.
- **Serialize carefully** — JSON.parse/stringify is fine for most cases but consider MessagePack for large payloads.
- **Plan for Redis downtime** — your app should degrade gracefully to direct database queries, not crash.
- **Avoid caching user-specific data in shared caches** without proper key isolation — data leaks are a security incident.
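The "plan for Redis downtime" guideline can be sketched as a read wrapper that treats any cache error as a miss and falls back to the fetcher. `CacheClient` here is a hypothetical minimal interface for illustration, not an ioredis type:

```typescript
// Minimal cache interface; ioredis's `get` is compatible with this shape.
interface CacheClient {
  get(key: string): Promise<string | null>;
}

async function getCachedSafe<T>(
  client: CacheClient,
  key: string,
  fetcher: () => Promise<T>
): Promise<T> {
  try {
    const cached = await client.get(key);
    if (cached !== null) return JSON.parse(cached) as T;
  } catch {
    // Cache unavailable — degrade to a direct database read instead of failing.
  }
  return fetcher();
}
```

The same try/catch treatment belongs on the write path: a failed `set` should be logged and swallowed, never allowed to fail the request.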

Related Skills

product-strategy

from TerminalSkills/skills

Expert guidance for product strategy, helping product leaders define product vision, craft positioning, analyze competitive landscapes, choose pricing models, and build outcome-driven roadmaps. Applies frameworks from Marty Cagan (Empowered), April Dunford (Obviously Awesome), Gibson Biddle (DHM Model), and Reforge.

pricing-strategy

When the user wants help with pricing decisions, packaging, or monetization strategy. Also use when the user mentions 'pricing,' 'pricing tiers,' 'freemium,' 'free trial,' 'packaging,' 'price increase,' 'value metric,' 'Van Westendorp,' 'willingness to pay,' or 'monetization.' This skill covers pricing research, tier structure, and packaging strategy.

launch-strategy

When the user wants to plan a product launch, feature announcement, or release strategy. Also use when the user mentions 'launch,' 'Product Hunt,' 'feature release,' 'announcement,' 'go-to-market,' 'beta launch,' 'early access,' 'waitlist,' or 'product update.' This skill covers phased launches, channel strategy, and ongoing launch momentum.

free-tool-strategy

When the user wants to plan, evaluate, or build a free tool for marketing purposes — lead generation, SEO value, or brand awareness. Also use when the user mentions "engineering as marketing," "free tool," "marketing tool," "calculator," "generator," "interactive tool," "lead gen tool," "build a tool for leads," or "free resource." This skill bridges engineering and marketing — useful for founders and technical marketers.

content-strategy

When the user wants to plan a content strategy, decide what content to create, or figure out what topics to cover. Also use when the user mentions "content strategy," "what should I write about," "content ideas," "blog strategy," "topic clusters," or "content planning." For writing individual pieces, see copywriting. For SEO-specific audits, see seo-audit.

zustand

You are an expert in Zustand, the small, fast, and scalable state management library for React. You help developers manage global state without boilerplate using Zustand's hook-based stores, selectors for performance, middleware (persist, devtools, immer), computed values, and async actions — replacing Redux complexity with a simple, un-opinionated API in under 1KB.

zoho

Integrate and automate Zoho products. Use when a user asks to work with Zoho CRM, Zoho Books, Zoho Desk, Zoho Projects, Zoho Mail, or Zoho Creator, build custom integrations via Zoho APIs, automate workflows with Deluge scripting, sync data between Zoho apps and external systems, manage leads and deals, automate invoicing, build custom Zoho Creator apps, set up webhooks, or manage Zoho organization settings. Covers Zoho CRM, Books, Desk, Projects, Creator, and cross-product integrations.

zod

You are an expert in Zod, the TypeScript-first schema declaration and validation library. You help developers define schemas that validate data at runtime AND infer TypeScript types at compile time — eliminating the need to write types and validators separately. Used for API input validation, form validation, environment variables, config files, and any data boundary.

zipkin

Deploy and configure Zipkin for distributed tracing and request flow visualization. Use when a user needs to set up trace collection, instrument Java/Spring or other services with Zipkin, analyze service dependencies, or configure storage backends for trace data.

zig

Expert guidance for Zig, the systems programming language focused on performance, safety, and readability. Helps developers write high-performance code with compile-time evaluation, seamless C interop, no hidden control flow, and no garbage collector. Zig is used for game engines, operating systems, networking, and as a C/C++ replacement.

zed

Expert guidance for Zed, the high-performance code editor built in Rust with native collaboration, AI integration, and GPU-accelerated rendering. Helps developers configure Zed, create custom extensions, set up collaborative editing sessions, and integrate AI assistants for productive coding.

zeabur

Expert guidance for Zeabur, the cloud deployment platform that auto-detects frameworks, builds and deploys applications with zero configuration, and provides managed services like databases and message queues. Helps developers deploy full-stack applications with automatic scaling and one-click marketplace services.