Best use case

lancedb-memory is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using lancedb-memory can expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/lancedb-memory/SKILL.md --create-dirs "https://raw.githubusercontent.com/sundial-org/awesome-openclaw-skills/main/skills/lancedb-memory/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/lancedb-memory/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How lancedb-memory Compares

Feature / Agent            lancedb-memory    Standard Approach
Platform Support           Not specified     Limited / Varies
Context Awareness          High              Baseline
Installation Complexity    Unknown           N/A

Frequently Asked Questions

What does this skill do?

This skill gives your AI agent persistent long-term memory backed by LanceDB: it stores timestamped, tagged, categorized memories and retrieves them by category or semantic search. The full implementation appears in the SKILL.md source below.

Where can I find the source code?

The source code lives in the sundial-org/awesome-openclaw-skills repository on GitHub, under skills/lancedb-memory/SKILL.md.

SKILL.md Source

#!/usr/bin/env python3
"""
LanceDB integration for long-term memory management.
Provides vector search and semantic memory capabilities.
"""

import json
import lancedb
import pyarrow as pa
from datetime import datetime
from typing import List, Dict, Any, Optional
from pathlib import Path

class LanceMemoryDB:
    """LanceDB wrapper for long-term memory storage and retrieval."""
    
    def __init__(self, db_path: str = str(Path.home() / "clawd/memory/lancedb")):
        self.db_path = Path(db_path)
        self.db_path.mkdir(parents=True, exist_ok=True)
        self.db = lancedb.connect(self.db_path)
        
        # Ensure memory table exists
        if "memory" not in self.db.table_names():
            self._create_memory_table()
    
    def _create_memory_table(self):
        """Create the memory table with an explicit PyArrow schema
        (LanceDB expects a pyarrow.Schema, not a list of dicts)."""
        schema = pa.schema([
            pa.field("id", pa.int64(), nullable=False),
            pa.field("timestamp", pa.timestamp("us"), nullable=False),
            pa.field("content", pa.string(), nullable=False),
            pa.field("category", pa.string()),
            pa.field("tags", pa.list_(pa.string())),
            pa.field("importance", pa.int64()),
            pa.field("metadata", pa.string()),  # JSON-encoded dict
        ])

        self.db.create_table("memory", schema=schema)
    
    def add_memory(self, content: str, category: str = "general", tags: List[str] = None, 
                   importance: int = 5, metadata: Dict[str, Any] = None) -> int:
        """Add a new memory entry."""
        table = self.db.open_table("memory")
        
        # Get next ID
        max_id = table.to_pandas()["id"].max() if table.count_rows() > 0 else 0
        new_id = int(max_id) + 1
        
        # Insert new memory
        memory_data = {
            "id": new_id,
            "timestamp": datetime.now(),
            "content": content,
            "category": category,
            "tags": tags or [],
            "importance": importance,
            "metadata": json.dumps(metadata or {})  # stored as a JSON string
        }
        
        table.add([memory_data])
        return new_id
    
    def search_memories(self, query: str, category: str = None, limit: int = 10) -> List[Dict]:
        """Search memories using vector similarity.

        Note: passing a raw string to table.search() requires an embedding
        function and a vector column to be configured on the table; the
        schema above stores raw text only, so configure an embedding
        function before relying on semantic search.
        """
        table = self.db.open_table("memory")

        # Build the query, adding a SQL filter only when a category is given
        search = table.search(query).limit(limit)
        if category:
            search = search.where(f"category = '{category}'")

        return search.to_list()
    
    def get_memories_by_category(self, category: str, limit: int = 50) -> List[Dict]:
        """Get memories by category."""
        table = self.db.open_table("memory")
        df = table.to_pandas()
        filtered = df[df["category"] == category].head(limit)
        return filtered.to_dict("records")
    
    def get_memory_by_id(self, memory_id: int) -> Optional[Dict]:
        """Get a specific memory by ID."""
        table = self.db.open_table("memory")
        df = table.to_pandas()
        result = df[df["id"] == memory_id]
        return result.to_dict("records")[0] if len(result) > 0 else None
    
    def update_memory(self, memory_id: int, **kwargs) -> bool:
        """Update a memory entry."""
        table = self.db.open_table("memory")
        
        valid_fields = ["content", "category", "tags", "importance", "metadata"]
        updates = {k: v for k, v in kwargs.items() if k in valid_fields}
        
        if not updates:
            return False
        
        # Keep metadata JSON-encoded to match the string column
        if "metadata" in updates and isinstance(updates["metadata"], dict):
            updates["metadata"] = json.dumps(updates["metadata"])

        # LanceTable.update takes keyword arguments: where=..., values=...
        table.update(where=f"id = {memory_id}", values=updates)
        return True
    
    def delete_memory(self, memory_id: int) -> bool:
        """Delete a memory entry."""
        table = self.db.open_table("memory")
        current_count = table.count_rows()
        table.delete(f"id = {memory_id}")
        return table.count_rows() < current_count
    
    def get_all_categories(self) -> List[str]:
        """Get all unique categories."""
        table = self.db.open_table("memory")
        df = table.to_pandas()
        return df["category"].dropna().unique().tolist()
    
    def get_memory_stats(self) -> Dict[str, Any]:
        """Get statistics about memory storage."""
        table = self.db.open_table("memory")
        df = table.to_pandas()
        
        return {
            "total_memories": len(df),
            "categories": len(self.get_all_categories()),
            "by_category": df["category"].value_counts().to_dict(),
            "date_range": {
                "earliest": df["timestamp"].min().isoformat() if len(df) > 0 else None,
                "latest": df["timestamp"].max().isoformat() if len(df) > 0 else None
            }
        }

# Global instance
lancedb_memory = LanceMemoryDB()

def add_memory(content: str, category: str = "general", tags: List[str] = None, 
               importance: int = 5, metadata: Dict[str, Any] = None) -> int:
    """Add a memory to the LanceDB store."""
    return lancedb_memory.add_memory(content, category, tags, importance, metadata)

def search_memories(query: str, category: str = None, limit: int = 10) -> List[Dict]:
    """Search memories using semantic similarity."""
    return lancedb_memory.search_memories(query, category, limit)

def get_memories_by_category(category: str, limit: int = 50) -> List[Dict]:
    """Get memories by category."""
    return lancedb_memory.get_memories_by_category(category, limit)

def get_memory_stats() -> Dict[str, Any]:
    """Get memory storage statistics."""
    return lancedb_memory.get_memory_stats()

# Example usage
if __name__ == "__main__":
    # Test the database
    print("Testing LanceDB memory integration...")
    
    # Add a test memory
    test_id = add_memory(
        content="This is a test memory for LanceDB integration",
        category="test",
        tags=["lancedb", "integration", "test"],
        importance=8
    )
    print(f"Added memory with ID: {test_id}")
    
    # Search for memories
    results = search_memories("test memory")
    print(f"Search results: {len(results)} memories found")
    
    # Get stats
    stats = get_memory_stats()
    print(f"Memory stats: {stats}")
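The wrapper above requires a LanceDB installation. To illustrate just the record shape and category filtering the skill uses, here is a hypothetical stdlib-only stand-in; the function names mirror the module's helpers, but no vector search is performed and nothing is persisted:

```python
import json
from datetime import datetime, timezone

# In-memory stand-in mirroring the skill's "memory" schema (no LanceDB needed).
memories = []

def add_memory(content, category="general", tags=None, importance=5, metadata=None):
    """Append a record shaped like the LanceDB rows above; return its id."""
    new_id = (max(m["id"] for m in memories) + 1) if memories else 1
    memories.append({
        "id": new_id,
        "timestamp": datetime.now(timezone.utc),
        "content": content,
        "category": category,
        "tags": tags or [],
        "importance": importance,
        "metadata": json.dumps(metadata or {}),  # JSON string, as in the schema
    })
    return new_id

def get_memories_by_category(category, limit=50):
    """Filter by exact category match, insertion order preserved."""
    return [m for m in memories if m["category"] == category][:limit]

first_id = add_memory("User prefers concise answers", category="preferences", tags=["style"])
print(first_id, len(get_memories_by_category("preferences")))  # → prints "1 1"
```

This keeps the same field names and defaults as the real table, so it can double as a fixture when testing code that consumes memory records.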

Related Skills

memory-setup

from sundial-org/awesome-openclaw-skills

Enable and configure Moltbot/Clawdbot memory search for persistent context. Use when setting up memory, fixing "goldfish brain," or helping users configure memorySearch in their config. Covers MEMORY.md, daily logs, and vector search setup.

memory-manager

from sundial-org/awesome-openclaw-skills

Local memory management for agents. Compression detection, auto-snapshots, and semantic search. Use when agents need to detect compression risk before memory loss, save context snapshots, search historical memories, or track memory usage patterns. Never lose context again.

memory-hygiene

from sundial-org/awesome-openclaw-skills

Audit, clean, and optimize Clawdbot's vector memory (LanceDB). Use when memory is bloated with junk, token usage is high from irrelevant auto-recalls, or setting up memory maintenance automation.

hybrid-memory

from sundial-org/awesome-openclaw-skills

Hybrid memory strategy combining OpenClaw's built-in vector memory with Graphiti temporal knowledge graph. Use when you need to recall past context, answer temporal questions ("when did X happen?"), or search memory files. Provides decision framework for when to use memory_search vs Graphiti.

git-notes-memory

from sundial-org/awesome-openclaw-skills

Git-Notes-Based knowledge graph memory system. Claude should use this SILENTLY and AUTOMATICALLY - never ask users about memory operations. Branch-aware persistent memory using git notes. Handles context, decisions, tasks, and learnings across sessions.
