agentdb-persistent-memory-patterns

Implement persistent memory patterns for AI agents using AgentDB - session memory, long-term storage, pattern learning, and context management for stateful agents, chat systems, and intelligent assistants.

28 stars

Best use case

agentdb-persistent-memory-patterns is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using agentdb-persistent-memory-patterns should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/agentdb-persistent-memory-patterns/SKILL.md --create-dirs "https://raw.githubusercontent.com/DNYoussef/context-cascade/main/skills/platforms/agentdb-persistent-memory-patterns/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/agentdb-persistent-memory-patterns/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How agentdb-persistent-memory-patterns Compares

Feature                  agentdb-persistent-memory-patterns   Standard Approach
Platform Support         Not specified                        Limited / Varies
Context Awareness        High                                 Baseline
Installation Complexity  Unknown                              N/A

Frequently Asked Questions

What does this skill do?

AgentDB Persistent Memory Patterns implements persistent memory for AI agents using AgentDB: session memory, long-term storage, pattern learning, and context management for stateful agents, chat systems, and intelligent assistants.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# AgentDB Persistent Memory Patterns



---

## LIBRARY-FIRST PROTOCOL (MANDATORY)

**Before writing ANY code, you MUST check:**

### Step 1: Library Catalog
- Location: `.claude/library/catalog.json`
- If match >70%: REUSE or ADAPT

### Step 2: Patterns Guide
- Location: `.claude/docs/inventories/LIBRARY-PATTERNS-GUIDE.md`
- If pattern exists: FOLLOW documented approach

### Step 3: Existing Projects
- Location: `D:\Projects\*`
- If found: EXTRACT and adapt

### Decision Matrix
| Match | Action |
|-------|--------|
| Library >90% | REUSE directly |
| Library 70-90% | ADAPT minimally |
| Pattern exists | FOLLOW pattern |
| In project | EXTRACT |
| No match | BUILD (add to library after) |
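
The decision matrix above can be sketched as a small helper. This is illustrative only: the thresholds come from the table, but the type and function names are assumptions, and real catalog matching would produce the similarity score some other way.

```typescript
// Illustrative sketch of the decision matrix above; names are hypothetical.
type LibraryAction = "REUSE" | "ADAPT" | "FOLLOW" | "EXTRACT" | "BUILD";

interface MatchContext {
  libraryScore: number;   // 0..1 similarity against the library catalog
  patternExists: boolean; // documented in the patterns guide
  foundInProject: boolean; // located in an existing project
}

function decideAction(ctx: MatchContext): LibraryAction {
  if (ctx.libraryScore > 0.9) return "REUSE";   // Library >90%: reuse directly
  if (ctx.libraryScore >= 0.7) return "ADAPT";  // 70-90%: adapt minimally
  if (ctx.patternExists) return "FOLLOW";       // follow documented pattern
  if (ctx.foundInProject) return "EXTRACT";     // extract and adapt
  return "BUILD";                               // build, then add to library
}
```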

---

## Overview

Implement persistent memory patterns for AI agents using AgentDB - session memory, long-term storage, pattern learning, and context management for stateful agents, chat systems, and intelligent assistants.

## SOP Framework: 5-Phase Memory Implementation

### Phase 1: Design Memory Architecture (1-2 hours)
- Define memory schemas (episodic, semantic, procedural)
- Plan storage layers (short-term, working, long-term)
- Design retrieval mechanisms
- Configure persistence strategies

### Phase 2: Implement Storage Layer (2-3 hours)
- Create memory stores in AgentDB
- Implement session management
- Build long-term memory persistence
- Set up memory indexing

### Phase 3: Test Memory Operations (1-2 hours)
- Validate store/retrieve operations
- Test memory consolidation
- Verify pattern recognition
- Benchmark performance

### Phase 4: Optimize Performance (1-2 hours)
- Implement caching layers
- Optimize retrieval queries
- Add memory compression
- Performance tuning

### Phase 5: Document Patterns (1 hour)
- Create usage documentation
- Document memory patterns
- Write integration examples
- Generate API documentation

## Quick Start

```typescript
import { AgentDB, MemoryManager } from 'agentdb-memory';

// Initialize memory system
const memoryDB = new AgentDB({
  name: 'agent-memory',
  dimensions: 768,
  memory: {
    sessionTTL: 3600,
    consolidationInterval: 300,
    maxSessionSize: 1000
  }
});

const memoryManager = new MemoryManager({
  database: memoryDB,
  layers: ['episodic', 'semantic', 'procedural']
});

// Store memory
await memoryManager.store({
  type: 'episodic',
  content: 'User preferred dark theme',
  context: { userId: '123', timestamp: Date.now() }
});

// Retrieve memory
const memories = await memoryManager.retrieve({
  query: 'user preferences',
  type: 'episodic',
  limit: 10
});
```

## Memory Patterns

### Session Memory
```typescript
const session = await memoryManager.createSession('user-123');
await session.store('conversation', messageHistory);
await session.store('preferences', userPrefs);
const context = await session.getContext();
```

### Long-Term Storage
```typescript
await memoryManager.consolidate({
  from: 'working-memory',
  to: 'long-term-memory',
  strategy: 'importance-based'
});
```

### Pattern Learning
```typescript
const patterns = await memoryManager.learnPatterns({
  memory: 'episodic',
  algorithm: 'clustering',
  minSupport: 0.1
});
```

## Success Metrics

- Memory persists across agent restarts
- Retrieval latency < 50ms (p95)
- Pattern recognition accuracy > 85%
- Context maintained with 95% accuracy
- Memory consolidation working
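
One way to validate the p95 latency target is to record per-query timings and take the 95th percentile. The helper below is a standalone sketch (nearest-rank method), not part of the AgentDB API:

```typescript
// Sketch: compute a latency percentile from recorded timings (ms).
// Not part of AgentDB; names are illustrative.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: smallest value covering p% of samples.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}

const timingsMs = [12, 9, 48, 15, 11, 30, 22, 8, 41, 19];
const p95 = percentile(timingsMs, 95);
const meetsTarget = p95 < 50; // the "retrieval latency < 50ms (p95)" target above
```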

## MCP Requirements

This skill operates using AgentDB's npm package and API only. No additional MCP servers required.

All AgentDB memory operations are performed through:
- npm CLI: `npx agentdb@latest`
- TypeScript/JavaScript API: `import { AgentDB, MemoryManager } from 'agentdb-memory'`

## Additional Resources

- Full documentation: SKILL.md
- Process guide: PROCESS.md
- AgentDB Memory Docs: https://agentdb.dev/docs/memory

## Core Principles

AgentDB Persistent Memory Patterns operates on 3 fundamental principles:

### Principle 1: Memory Layering - Separate Short-Term, Working, and Long-Term Storage

Memory systems mirror human cognition by organizing information across distinct temporal layers. Short-term memory handles immediate context (current conversation), working memory maintains active task state, and long-term memory consolidates important patterns for future retrieval.

In practice:
- Store conversation context in session memory with TTL expiration (1-hour default)
- Use working memory for active agent tasks and intermediate computation results
- Consolidate proven patterns and user preferences to long-term storage using importance-based criteria
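
A minimal sketch of the session layer described above, assuming nothing beyond the TTL semantics (1-hour default). This is a standalone illustration, not the AgentDB API:

```typescript
// Sketch of Principle 1: a short-term session layer with TTL expiry.
// Standalone illustration; not the AgentDB API.
class SessionMemory {
  private entries = new Map<string, { value: unknown; expiresAt: number }>();
  constructor(private ttlMs: number = 3600_000) {} // 1-hour default, as above

  store(key: string, value: unknown, now: number = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }

  retrieve(key: string, now: number = Date.now()): unknown | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) { // expired: evict and report a miss
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```

Working and long-term layers would wrap the same store/retrieve shape with longer retention and an importance test at consolidation time.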

### Principle 2: Pattern Learning - Extract Reusable Knowledge from Episodic Memory

Raw episodic memories (specific events) are valuable but incomplete. True intelligence emerges when systems detect patterns across episodes - recurring user preferences, common error scenarios, effective solution strategies - and encode them as semantic knowledge.

In practice:
- Run clustering algorithms on episodic memory to identify recurring patterns (min support threshold: 10%)
- Convert pattern clusters into semantic memory entries with confidence scores
- Use procedural memory to store proven solution workflows that can be replayed in similar contexts
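
The promotion step can be sketched as follows. This toy version counts exact repeats rather than clustering embeddings, and it reuses support as a crude confidence score; both are simplifying assumptions:

```typescript
// Sketch of Principle 2: promote recurring episodic observations to
// semantic entries once they clear a minimum-support threshold (10% above).
// Illustrative only; a real system would cluster embeddings, not exact strings.
interface SemanticEntry { pattern: string; confidence: number }

function learnPatterns(episodes: string[], minSupport = 0.1): SemanticEntry[] {
  const counts = new Map<string, number>();
  for (const e of episodes) counts.set(e, (counts.get(e) ?? 0) + 1);
  const results: SemanticEntry[] = [];
  for (const [pattern, count] of counts) {
    const support = count / episodes.length;
    if (support >= minSupport) {
      results.push({ pattern, confidence: support }); // support as crude confidence
    }
  }
  return results.sort((a, b) => b.confidence - a.confidence);
}
```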

### Principle 3: Performance-First Retrieval - Sub-50ms Latency with HNSW Indexing

Memory systems fail if retrieval is slower than computation. Production AI agents require sub-50ms memory access to maintain real-time responsiveness, necessitating HNSW indexing, quantization, and aggressive caching strategies.

In practice:
- Build HNSW indexes on all memory stores during initialization (M=16, efConstruction=200)
- Apply product quantization for 4x memory reduction without accuracy loss
- Implement LRU caching with 70%+ hit rate for frequently accessed memories
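
The caching tier can be sketched with a plain LRU that tracks its own hit rate against the 70% target. This is a standalone illustration; AgentDB's internal caching may differ:

```typescript
// Sketch of Principle 3's caching tier: an LRU cache tracking hit rate.
class LruCache<V> {
  private map = new Map<string, V>(); // Map preserves insertion order
  private hits = 0;
  private lookups = 0;
  constructor(private capacity: number) {}

  get(key: string): V | undefined {
    this.lookups++;
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // refresh recency by re-inserting at the end
    this.map.set(key, value);
    this.hits++;
    return value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Evict least-recently-used entry (first key in insertion order).
      this.map.delete(this.map.keys().next().value as string);
    }
    this.map.set(key, value);
  }

  hitRate(): number {
    return this.lookups === 0 ? 0 : this.hits / this.lookups;
  }
}
```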

## Common Anti-Patterns

| Anti-Pattern | Problem | Solution |
|--------------|---------|----------|
| **Memory Hoarder - Store Everything Forever** | Unbounded storage growth leads to slow retrieval, high costs, and context pollution. Agents retrieve irrelevant memories from 6 months ago. | Implement aggressive TTL policies (1-hour sessions, 30-day working memory, importance-based long-term retention). Use consolidation strategies to compress episodic memories into semantic patterns. |
| **Flat Memory - Single Storage Layer** | All memories treated equally creates retrieval chaos. No distinction between current conversation context and learned patterns from last year. | Use 3-layer architecture: session (ephemeral), working (task-scoped), long-term (consolidated). Apply different retrieval strategies per layer (recency for session, relevance for semantic). |
| **Retrieval Thrashing - Query Every Memory Store on Every Request** | Exhaustive searches across all memory layers cause latency spikes (200ms+ retrieval). Agents spend more time remembering than acting. | Use cascading retrieval: session first (fastest), semantic second (indexed), episodic last (cold storage). Implement query routing based on memory type and recency. Cache hot paths. |
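
The cascading-retrieval fix in the last row can be sketched as a short-circuit over layers ordered by cost. The layer lookups here are plain functions standing in for real stores; names are illustrative:

```typescript
// Sketch of the fix for "Retrieval Thrashing": query layers in cost order
// (session -> semantic -> episodic) and stop at the first hit.
type Lookup = (query: string) => string | undefined;

function cascadingRetrieve(
  query: string,
  layers: { name: string; lookup: Lookup }[]
): { layer: string; value: string } | undefined {
  for (const { name, lookup } of layers) {
    const value = lookup(query); // cheapest layer first
    if (value !== undefined) return { layer: name, value };
  }
  return undefined; // miss in every layer
}
```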

## Conclusion

AgentDB Persistent Memory Patterns transforms stateless AI agents into intelligent systems with genuine memory. By implementing layered storage (session, working, long-term), pattern learning algorithms, and performance-optimized retrieval, you enable agents to accumulate knowledge across interactions rather than starting from zero on every request. The 5-phase SOP ensures systematic implementation from architecture design through performance tuning, with success validated through sub-50ms retrieval latency and 95%+ context accuracy.

This skill is essential when building chat systems requiring conversation history, intelligent assistants that learn user preferences over time, or multi-agent systems coordinating through shared memory. The pattern learning capabilities distinguish AgentDB from basic vector databases - instead of merely storing embeddings, it actively extracts reusable knowledge from experience. When agents can remember what worked before, recall user preferences without re-asking, and apply proven patterns to new problems, they transition from tools to true collaborators.

The performance requirements are non-negotiable for production systems. Users abandon agents that "think" for 500ms between responses. By combining HNSW indexing, quantization, and caching strategies, you achieve both intelligent memory and real-time responsiveness - the foundation for AI systems that feel genuinely aware.

Related Skills

reasoningbank-with-agentdb

Implement ReasoningBank adaptive learning with AgentDB's 150x faster vector database. Includes trajectory tracking, verdict judgment, memory distillation, and pattern recognition. Use when building self-learning agents, optimizing decision-making, or implementing experience replay systems.

reasoningbank-adaptive-learning-with-agentdb

agentdb-vector-search-optimization

agentdb-semantic-vector-search

agentdb-reinforcement-learning-training

agentdb-performance-optimization

Apply quantization to reduce memory by 4-32x. Enable HNSW indexing for 150x faster search. Configure caching strategies and implement batch operations. Use when optimizing memory usage, improving search speed, or scaling to millions of vectors. Deploy these optimizations to achieve 12,500x performance gains.

agentdb-learning-plugins

Create AI learning plugins using AgentDB's 9 reinforcement learning algorithms. Train Decision Transformer, Q-Learning, SARSA, and Actor-Critic models. Deploy these plugins to build self-learning agents, implement RL workflows, and optimize agent behavior through experience. Apply offline RL for safe learning from logged data.

agentdb-advanced-features

Master advanced AgentDB features including QUIC synchronization, multi-database management, custom distance metrics, hybrid search, and distributed systems integration. Use when building distributed AI systems, multi-agent coordination, or advanced vector search applications.

web-scraping

Structured data extraction from web pages using claude-in-chrome MCP with sequential-thinking planning. Focus on READ operations, data transformation, and pagination handling for multi-page extraction.

visual-testing

Screenshot-based visual comparison and regression testing using claude-in-chrome MCP. Captures, compares, and validates UI states to detect layout shifts, visual bugs, and design regressions across viewports.

reflect

Extract learnings from session corrections and patterns, update skill files with persistent memory. Implements Loop 1.5 - per-session micro-learning between execution and meta-optimization.

All related skills are from DNYoussef/context-cascade.