exa-architecture-variants

Choose and implement Exa architecture patterns at different scales: direct search, cached search, and RAG pipeline. Use when designing Exa integrations, choosing between simple search and full RAG, or planning architecture for different traffic volumes. Trigger with phrases like "exa architecture", "exa blueprint", "how to structure exa", "exa RAG design", "exa at scale".

25 stars

Best use case

exa-architecture-variants is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using exa-architecture-variants should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/exa-architecture-variants/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/exa-architecture-variants/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/exa-architecture-variants/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How exa-architecture-variants Compares

| Feature / Agent | exa-architecture-variants | Standard Approach |
|-----------------|---------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It helps you choose and implement Exa architecture patterns at three scales: direct search, cached search, and a full RAG pipeline, matched to your traffic volume and to whether you need AI answers with citations.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Exa Architecture Variants

## Overview
Three deployment architectures for Exa neural search at different scales. Each uses real Exa SDK methods: `search`, `searchAndContents`, `findSimilar`, `getContents`, and `answer`.

## Decision Matrix

| Factor | Direct Search | Cached Search | RAG Pipeline |
|--------|--------------|---------------|--------------|
| Volume | < 1K/day | 1K-50K/day | Any volume |
| Latency | 500-2000ms | ~50ms (cached) | 3-8s total |
| Use Case | Simple search UI | Content aggregation | AI answers with citations |
| Complexity | Low | Medium | High |
| Cache Required | No | Yes (Redis/LRU) | Yes |
| Exa Methods | `searchAndContents` | `searchAndContents` + cache | All methods |
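The matrix above can be encoded as a small routing helper. This is a sketch only: the function name and thresholds below are illustrative, not part of the skill.

```typescript
// Hypothetical helper encoding the decision matrix above.
type Variant = "direct" | "cached" | "rag";

function chooseVariant(queriesPerDay: number, needsAIAnswers: boolean): Variant {
  if (needsAIAnswers) return "rag";           // AI answers with citations, any volume
  if (queriesPerDay < 1_000) return "direct"; // simple search UI, low traffic
  return "cached";                            // content aggregation, 1K-50K/day
}
```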

## Instructions

### Variant 1: Direct Search Integration
**Best for:** Adding search to an existing app, < 1K queries/day.

```typescript
import Exa from "exa-js";
import express from "express";

const app = express();
const exa = new Exa(process.env.EXA_API_KEY);

// Simple search endpoint
app.get("/api/search", async (req, res) => {
  const query = req.query.q as string;
  if (!query) return res.status(400).json({ error: "q required" });

  try {
    const results = await exa.searchAndContents(query, {
      type: "auto",
      numResults: 5,
      text: { maxCharacters: 500 },
      highlights: { maxCharacters: 300, query },
    });

    res.json(results.results.map(r => ({
      title: r.title,
      url: r.url,
      snippet: r.highlights?.join(" ") || r.text?.substring(0, 200),
      score: r.score,
    })));
  } catch (err: any) {
    res.status(err.status || 500).json({ error: err.message });
  }
});
```

### Variant 2: Cached Search with Category Profiles
**Best for:** High-traffic search, 1K-50K queries/day, content discovery.

```typescript
import Exa from "exa-js";
import { LRUCache } from "lru-cache";

const exa = new Exa(process.env.EXA_API_KEY);
const cache = new LRUCache<string, any>({ max: 5000, ttl: 3600 * 1000 });

const PROFILES = {
  news: {
    type: "auto" as const,
    category: "news" as const,
    numResults: 10,
    text: { maxCharacters: 500 },
  },
  research: {
    type: "neural" as const,
    category: "research paper" as const,
    numResults: 10,
    text: { maxCharacters: 2000 },
    highlights: { maxCharacters: 500 },
  },
  companies: {
    type: "auto" as const,
    category: "company" as const,
    numResults: 10,
    text: { maxCharacters: 500 },
  },
};

async function cachedProfileSearch(
  query: string,
  profile: keyof typeof PROFILES
) {
  const key = `${query.toLowerCase()}:${profile}`;
  const cached = cache.get(key);
  if (cached) return cached;

  const results = await exa.searchAndContents(query, PROFILES[profile]);
  cache.set(key, results);
  return results;
}
```
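The cache key above lowercases the query but keeps its whitespace, so `"AI  news"` and `"ai news"` would occupy separate entries. A slightly stricter normalizer (hypothetical, not part of the skill) collapses whitespace runs so trivially different queries share one entry:

```typescript
// Hypothetical stricter cache-key builder: trims, lowercases, and collapses
// whitespace so "  AI News " and "ai news" map to the same cache entry.
function cacheKey(query: string, profile: string): string {
  return `${query.trim().toLowerCase().replace(/\s+/g, " ")}:${profile}`;
}
```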

### Variant 3: Full RAG Pipeline
**Best for:** AI-powered answers, research agents, 50K+ queries/day.

```typescript
import Exa from "exa-js";
import { LRUCache } from "lru-cache";

const exa = new Exa(process.env.EXA_API_KEY);
const contextCache = new LRUCache<string, any>({ max: 10000, ttl: 7200 * 1000 });

class ExaRAGPipeline {
  // Phase 1: Search for relevant sources
  async gatherContext(question: string, maxSources = 5) {
    const cacheKey = question.toLowerCase().trim();
    const cached = contextCache.get(cacheKey);
    if (cached) return cached;

    const results = await exa.searchAndContents(question, {
      type: "neural",
      numResults: maxSources,
      text: { maxCharacters: 2000 },
      highlights: { maxCharacters: 500, query: question },
    });

    contextCache.set(cacheKey, results);
    return results;
  }

  // Phase 2: Expand with similar content
  async expandContext(topResultUrl: string, numSimilar = 3) {
    return exa.findSimilarAndContents(topResultUrl, {
      numResults: numSimilar,
      text: { maxCharacters: 1500 },
      excludeSourceDomain: true,
    });
  }

  // Phase 3: Format for LLM context injection
  formatForLLM(results: any[]) {
    return results.map((r, i) =>
      `[Source ${i + 1}] ${r.title}\n` +
      `URL: ${r.url}\n` +
      `Content: ${r.text}\n` +
      `Key points: ${r.highlights?.join(" | ") || "N/A"}`
    ).join("\n\n---\n\n");
  }

  // Phase 4: Use Exa's built-in answer endpoint
  async getAnswer(question: string) {
    const answer = await exa.answer(question, { text: true });
    return {
      answer: answer.answer,
      sources: answer.results.map(r => ({
        title: r.title,
        url: r.url,
      })),
    };
  }

  // Full pipeline
  async research(question: string) {
    const context = await this.gatherContext(question, 5);

    // Expand with similar content from top result
    let expanded = { results: [] as any[] };
    if (context.results[0]?.url) {
      expanded = await this.expandContext(context.results[0].url);
    }

    const allResults = [...context.results, ...expanded.results];
    const llmContext = this.formatForLLM(allResults);

    return {
      context: llmContext,
      sourceCount: allResults.length,
      sources: allResults.map(r => ({ title: r.title, url: r.url, score: r.score })),
    };
  }
}
```
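`formatForLLM` is the only pure step in the pipeline, so it can be exercised without an API key. The sketch below is a standalone copy of that method run against mocked result objects; the `MockResult` interface is an assumption for illustration, with field names mirroring the Exa result fields used above.

```typescript
// Standalone copy of the formatForLLM step with mocked results (no API call).
interface MockResult {
  title: string;
  url: string;
  text: string;
  highlights?: string[];
}

function formatForLLM(results: MockResult[]): string {
  return results.map((r, i) =>
    `[Source ${i + 1}] ${r.title}\n` +
    `URL: ${r.url}\n` +
    `Content: ${r.text}\n` +
    `Key points: ${r.highlights?.join(" | ") || "N/A"}`
  ).join("\n\n---\n\n");
}

const context = formatForLLM([
  { title: "Exa Docs", url: "https://docs.exa.ai", text: "Neural search API." },
  { title: "Exa Blog", url: "https://exa.ai/blog", text: "Updates.", highlights: ["RAG", "search"] },
]);
// context contains "[Source 1] Exa Docs", "Key points: N/A" for the first
// entry, and "Key points: RAG | search" for the second.
```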

## Scaling Notes

| Architecture | 10 QPS Limit Strategy |
|-------------|----------------------|
| Direct | Natural limit: ~864K searches/day at full rate |
| Cached | 50% cache hit = ~1.7M effective searches/day |
| RAG Pipeline | 2-3 API calls per question; cache aggressively |
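The figures in the table follow directly from the rate limit: 10 QPS over 86,400 seconds gives 864K API searches per day, and every cache hit served without an API call stretches that capacity. A sketch of the arithmetic (the formula below is an illustration, not an Exa-published model):

```typescript
// Effective daily capacity under a QPS cap: cache hits never reach the API,
// so serving capacity scales by 1 / (1 - hitRate).
function effectiveDailySearches(qpsLimit: number, cacheHitRate: number): number {
  const apiCallsPerDay = qpsLimit * 86_400; // seconds per day
  return Math.round(apiCallsPerDay / (1 - cacheHitRate));
}

effectiveDailySearches(10, 0);   // 864000   -> "~864K searches/day"
effectiveDailySearches(10, 0.5); // 1728000  -> "~1.7M effective searches/day"
```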

## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| Slow search in UI | No caching | Add LRU or Redis cache |
| Stale cached results | Long TTL | Reduce TTL for time-sensitive profiles |
| RAG hallucination | Poor source selection | Use highlights, increase numResults |
| High API costs | No query deduplication | Cache layer deduplicates identical queries |

## Resources
- [Exa API Documentation](https://docs.exa.ai)
- [Exa Contents Retrieval](https://docs.exa.ai/reference/contents-retrieval)
- [Exa Find Similar](https://docs.exa.ai/reference/find-similar-links)

## Next Steps
For reference architecture details, see `exa-reference-architecture`.

Related Skills

exa-reference-architecture

25
from ComeOnOliver/skillshub

Implement Exa reference architecture for search pipelines, RAG, and content discovery. Use when designing new Exa integrations, reviewing project structure, or establishing architecture standards for neural search applications. Trigger with phrases like "exa architecture", "exa project structure", "exa RAG pipeline", "exa reference design", "exa search pipeline".

evernote-reference-architecture

25
from ComeOnOliver/skillshub

Reference architecture for Evernote integrations. Use when designing system architecture, planning integrations, or building scalable Evernote applications. Trigger with phrases like "evernote architecture", "design evernote system", "evernote integration pattern", "evernote scale".

elevenlabs-reference-architecture

25
from ComeOnOliver/skillshub

Implement ElevenLabs reference architecture for production TTS/voice applications. Use when designing new ElevenLabs integrations, reviewing project structure, or building a scalable audio generation service. Trigger: "elevenlabs architecture", "elevenlabs project structure", "how to organize elevenlabs", "TTS service architecture", "elevenlabs design patterns", "voice API architecture".

documenso-reference-architecture

25
from ComeOnOliver/skillshub

Implement Documenso reference architecture with best-practice project layout. Use when designing new Documenso integrations, reviewing project structure, or establishing architecture standards for document signing applications. Trigger with phrases like "documenso architecture", "documenso best practices", "documenso project structure", "how to organize documenso".

deepgram-reference-architecture

25
from ComeOnOliver/skillshub

Implement Deepgram reference architecture for scalable transcription systems. Use when designing transcription pipelines, building production architectures, or planning Deepgram integration at scale. Trigger: "deepgram architecture", "transcription pipeline", "deepgram system design", "deepgram at scale", "enterprise deepgram", "deepgram queue".

databricks-reference-architecture

25
from ComeOnOliver/skillshub

Implement Databricks reference architecture with best-practice project layout. Use when designing new Databricks projects, reviewing architecture, or establishing standards for Databricks applications. Trigger with phrases like "databricks architecture", "databricks best practices", "databricks project structure", "how to organize databricks", "databricks layout".

customerio-reference-architecture

25
from ComeOnOliver/skillshub

Implement Customer.io enterprise reference architecture. Use when designing integration layers, event-driven architectures, or enterprise-grade Customer.io setups. Trigger: "customer.io architecture", "customer.io design", "customer.io enterprise", "customer.io integration pattern".

cursor-reference-architecture

25
from ComeOnOliver/skillshub

Reference architecture for Cursor IDE projects: directory structure, rules organization, indexing strategy, and team configuration patterns. Triggers on "cursor architecture", "cursor project structure", "cursor best practices", "cursor file structure".

coreweave-reference-architecture

25
from ComeOnOliver/skillshub

Reference architecture for CoreWeave GPU cloud deployments. Use when designing ML infrastructure, planning multi-model serving, or establishing CoreWeave deployment standards. Trigger with phrases like "coreweave architecture", "coreweave design", "coreweave infrastructure", "coreweave best practices".

cohere-reference-architecture

25
from ComeOnOliver/skillshub

Implement Cohere reference architecture with layered project layout for RAG and agents. Use when designing new Cohere integrations, reviewing project structure, or establishing architecture standards for Cohere API v2 applications. Trigger with phrases like "cohere architecture", "cohere project structure", "cohere layout", "organize cohere app", "cohere design pattern".

coderabbit-reference-architecture

25
from ComeOnOliver/skillshub

Implement CodeRabbit reference architecture with production-grade .coderabbit.yaml configuration. Use when designing review configuration for a new project, establishing team standards, or building a comprehensive review setup from scratch. Trigger with phrases like "coderabbit architecture", "coderabbit best practices", "coderabbit project structure", "coderabbit reference config", "coderabbit full setup".

clickup-reference-architecture

25
from ComeOnOliver/skillshub

Production architecture for ClickUp API v2 integrations with layered design, custom fields, time tracking, goals, and two-way sync patterns. Trigger: "clickup architecture", "clickup design", "clickup project structure", "clickup custom fields", "clickup time tracking", "clickup goals API".