slm-recall

Search and retrieve memories using semantic similarity, knowledge graph relationships, and full-text search. Use when the user asks to recall information, search memories, find past decisions, or query stored knowledge. Returns ranked results with relevance scores.

84 stars

Best use case

slm-recall is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using slm-recall can expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/slm-recall/SKILL.md --create-dirs "https://raw.githubusercontent.com/qualixar/superlocalmemory/main/ide/skills/slm-recall/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/slm-recall/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How slm-recall Compares

| Feature / Agent | slm-recall | Standard Approach |
|------------------------|---------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Search and retrieve memories using semantic similarity, knowledge graph relationships, and full-text search. Use when the user asks to recall information, search memories, find past decisions, or query stored knowledge. Returns ranked results with relevance scores.

Where can I find the source code?

The source code is hosted on GitHub at https://github.com/qualixar/superlocalmemory.

SKILL.md Source

# SuperLocalMemory: Recall

Search and retrieve memories using semantic similarity, knowledge graph relationships, and full-text search.

## Usage

```bash
slm recall "<query>" [--limit N] [--min-score 0.0-1.0] [--tags tag1,tag2] [--project name]
```

## Examples

### Example 1: Basic Search
```bash
slm recall "FastAPI"
```

**Output:**
```
🔍 Search Results (3 found)

[ID: 42] Score: 0.85
We use FastAPI for REST APIs
Tags: python, backend, api
Project: myapp
Created: 2026-02-05 14:23

[ID: 38] Score: 0.72
FastAPI is faster than Flask for high-throughput APIs
Tags: performance, python
Project: default
Created: 2026-02-01 09:15

[ID: 29] Score: 0.68
Async endpoints in FastAPI improve concurrency
Tags: async, fastapi, python
Project: myapp
Created: 2026-01-28 11:42
```

### Example 2: Limited Results
```bash
slm recall "authentication" --limit 3
```

**Returns:** Top 3 most relevant results

### Example 3: Minimum Relevance Score
```bash
slm recall "React hooks" --min-score 0.7
```

**Only returns results with relevance score ≥ 0.7**

### Example 4: Filter by Tags
```bash
slm recall "database" --tags postgresql,performance
```

**Only searches memories tagged with specified tags**

### Example 5: Filter by Project
```bash
slm recall "API design" --project myapp
```

**Only searches memories in specified project**

## Arguments

| Argument | Type | Required | Default | Description |
|----------|------|----------|---------|-------------|
| `<query>` | string | Yes | - | Search query |
| `--limit` | integer | No | 10 | Max results to return |
| `--min-score` | float | No | 0.3 | Minimum relevance (0.0-1.0) |
| `--tags` | string | No | None | Filter by tags (comma-separated) |
| `--project` | string | No | None | Filter by project |

## Search Methods Used

SuperLocalMemory uses **3 search methods simultaneously** and merges results:

### 1. Semantic Search (TF-IDF)
- Converts query to vector
- Finds similar content vectors
- Best for: Conceptual matches

**Example:**
```
Query: "authentication patterns"
Matches: "JWT tokens", "OAuth flow", "session management"
```
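The semantic step can be sketched in a few lines of Python. This is an illustrative toy (hand-rolled TF-IDF vectors, whitespace tokenization, a three-memory corpus), not SuperLocalMemory's actual index:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a tiny corpus (illustrative only)."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # Document frequency: in how many docs does each term appear?
    df = Counter(t for doc in tokenized for t in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: tf[t] / len(doc) * idf[t] for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = ["JWT tokens expire after 15 minutes",
            "OAuth flow uses a redirect URI",
            "PostgreSQL connection pooling settings"]
query = "authentication tokens"
vecs = tfidf_vectors(memories + [query])       # last vector is the query
scores = [cosine(v, vecs[-1]) for v in vecs[:-1]]
best = scores.index(max(scores))
print(memories[best])  # → JWT tokens expire after 15 minutes
```

The shared term "tokens" is enough to rank the JWT memory first even though the word "authentication" never appears in it verbatim.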

### 2. Knowledge Graph Traversal
- Finds related memories via graph edges
- Discovers connected concepts
- Best for: Related information

**Example:**
```
Query: "FastAPI"
Graph finds: "REST API" → "JWT auth" → "token refresh"
```
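Traversal like the chain above amounts to a bounded breadth-first search over memory links. A minimal sketch, with a hypothetical edge list standing in for the real knowledge graph:

```python
from collections import deque

# Hypothetical adjacency list: memories linked by shared entities.
edges = {
    "FastAPI": ["REST API"],
    "REST API": ["JWT auth"],
    "JWT auth": ["token refresh"],
    "token refresh": [],
}

def related(start, max_hops=2):
    """Collect nodes reachable within max_hops edges of the initial match."""
    seen, queue, found = {start}, deque([(start, 0)]), []
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue  # do not expand beyond the hop budget
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                found.append(nxt)
                queue.append((nxt, hops + 1))
    return found

print(related("FastAPI"))  # → ['REST API', 'JWT auth']
```

Capping the hop count keeps graph expansion cheap and prevents loosely connected memories from flooding the result list.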

### 3. Full-Text Search (FTS5)
- Exact keyword matching
- Fast for known terms
- Best for: Specific phrases

**Example:**
```
Query: "PostgreSQL 15"
Finds: Exact mentions of "PostgreSQL 15"
```
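The FTS5 behavior can be tried directly with Python's `sqlite3` module, assuming your bundled SQLite was compiled with the FTS5 extension (most builds are). The table and rows below are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# FTS5 virtual table: content is tokenized and indexed for keyword search.
con.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
con.executemany("INSERT INTO memories VALUES (?)", [
    ("Upgraded to PostgreSQL 15 for the analytics DB",),
    ("MySQL replication lag notes",),
    ("PostgreSQL 15 adds MERGE support",),
])
# Double quotes inside the MATCH string make this an exact-phrase query.
rows = con.execute(
    "SELECT content FROM memories WHERE memories MATCH ?", ('"PostgreSQL 15"',)
).fetchall()
print(len(rows))  # → 2
```

Only the two rows where "PostgreSQL" and "15" appear adjacently match; the MySQL row is never considered.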

## Relevance Scores

**Score range:** 0.0 - 1.0

| Score | Meaning |
|-------|---------|
| **0.9 - 1.0** | Excellent match (almost exact) |
| **0.7 - 0.9** | Strong match (very relevant) |
| **0.5 - 0.7** | Good match (related) |
| **0.3 - 0.5** | Weak match (loosely related) |
| **< 0.3** | Poor match (filtered out by default) |

**Factors affecting score:**
- Keyword overlap
- Semantic similarity
- Graph distance
- Recency (newer = slight boost)
- Importance level
- Your usage patterns (results improve automatically over time)
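One way to picture how these factors combine is a weighted blend. The weights and the function below are hypothetical, chosen only to show the shape of the calculation, not SuperLocalMemory's actual formula:

```python
from datetime import date

def blended_score(keyword, semantic, graph_dist, created, importance,
                  today=date(2026, 2, 6)):
    """Blend ranking factors into one 0-1 score (weights are hypothetical)."""
    recency = max(0.0, 1.0 - (today - created).days / 365)  # newer => nearer 1
    graph = 1.0 / (1 + graph_dist)                          # fewer hops => higher
    score = (0.35 * keyword + 0.35 * semantic +
             0.15 * graph + 0.10 * recency + 0.05 * importance)
    return round(min(score, 1.0), 2)

print(blended_score(keyword=0.9, semantic=0.8, graph_dist=1,
                    created=date(2026, 2, 5), importance=0.5))  # → 0.79
```

Note how recency and importance only nudge the score: keyword and semantic relevance dominate, matching the score table above.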

## Advanced Usage

### Natural Language (in AI chat)

Most AI assistants automatically invoke this skill when you ask:
- "What did we decide about..."
- "Recall information about..."
- "Search for..."
- "What do we know about..."

**Example in Cursor/Claude:**
```
You: "What did we decide about authentication?"
AI: [Automatically invokes slm-recall skill]
Found 3 memories about JWT tokens and OAuth...
```

### Combined with Other Skills

**1. Recall then remember:**
```bash
# Find existing memories
slm recall "API design"

# Add new related memory
slm remember "New API versioning strategy: use /v2/ prefix" --tags api,versioning
```

**2. Recall then build graph:**
```bash
# Find memories
slm recall "performance"

# Rebuild graph to discover new connections
slm build-graph
```

### Scripting & Automation

**Find and export:**
```bash
# Search and save to file
slm recall "security" --min-score 0.7 > security-notes.txt

# Count memories matching query
slm recall "python" --limit 999 | grep "^\\[ID:" | wc -l
```

**Regular reminders:**
```bash
# Daily standup helper (cron job)
#!/bin/bash
echo "Yesterday's decisions:"
slm recall "decided" --limit 5

echo -e "\nCurrent blockers:"
slm recall "blocked" --tags critical --limit 3
```

## Troubleshooting

### "No memories found"

**Causes:**
1. No memories matching query
2. Min-score too high
3. Wrong project filter

**Solutions:**
```bash
# Lower minimum score
slm recall "query" --min-score 0.1

# Remove filters
slm recall "query"  # No project/tag filters

# Check what memories exist
slm list --limit 20
```

### "Search too slow"

**Causes:**
- Large database (10,000+ memories)
- Complex query
- Knowledge graph not optimized

**Solutions:**
```bash
# Rebuild indexes
slm build-graph

# Use filters to narrow search
slm recall "query" --project myapp --tags specific-tag

# Increase min-score (fewer results = faster)
slm recall "query" --min-score 0.7
```

### "Results not relevant"

**Causes:**
- Query too vague
- Need to add more context

**Solutions:**
```bash
# Be more specific
❌ slm recall "it"
✅ slm recall "authentication system"

# Use multiple keywords
✅ slm recall "FastAPI JWT authentication"

# Use tags to filter
✅ slm recall "performance" --tags database
```

## Output Formats

### Standard Format (Default)
```
🔍 Search Results (3 found)

[ID: 42] Score: 0.85
Content preview...
Tags: tag1, tag2
Project: myapp
Created: 2026-02-05
```

### Programmatic Use
```bash
# JSON output (for scripts)
slm recall "query" --format json
# {"results": [{"id": 42, "content": "...", "score": 0.85}, ...]}

# CSV output
slm recall "query" --format csv
# id,content,score,tags,project,created_at
# 42,"Content...",0.85,"tag1,tag2",myapp,2026-02-05
```
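The JSON output is easy to post-process. A sketch in Python using a hard-coded sample payload in the documented shape (in practice you would pipe `slm recall "query" --format json` into the script and read `sys.stdin`):

```python
import json

# Sample payload matching the documented JSON shape; contents are made up.
payload = ('{"results": ['
           '{"id": 42, "content": "We use FastAPI", "score": 0.85}, '
           '{"id": 38, "content": "Flask notes", "score": 0.72}]}')

results = json.loads(payload)["results"]
# Keep only strong matches, mirroring --min-score filtering client-side.
strong = [r for r in results if r["score"] >= 0.8]
for r in strong:
    print(f'{r["id"]}\t{r["score"]}\t{r["content"]}')  # → 42	0.85	We use FastAPI
```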

## Performance Benchmarks

| Database Size | Search Time | Notes |
|--------------|-------------|-------|
| 100 memories | ~100ms | Instant |
| 1,000 memories | ~500ms | Fast |
| 10,000 memories | ~1.5s | Acceptable |
| 50,000 memories | ~5s | Consider filtering |

**Optimization tips:**
- Use `--min-score` to filter early
- Use `--tags` or `--project` to narrow search
- Rebuild graph periodically: `slm build-graph`

## Notes

- **Multi-method:** Combines semantic, graph, and keyword search
- **Ranked results:** Best matches first
- **Cross-tool:** Same results in Cursor, ChatGPT, Claude, etc.
- **Privacy:** All search happens locally
- **Real-time:** Database updates reflected immediately

## Related Commands

- `slm remember "<content>"` - Save a new memory
- `slm list` - List recent memories (no search)
- `slm status` - Check memory count and graph stats
- `slm build-graph` - Optimize search performance

---

**Created by:** [Varun Pratap Bhardwaj](https://github.com/varun369) (Solution Architect)
**Project:** SuperLocalMemory V3
**License:** AGPL-3.0 (see [LICENSE](../../LICENSE))
**Repository:** https://github.com/qualixar/superlocalmemory

*Open source doesn't mean removing credit. Attribution must be preserved per AGPL-3.0 terms.*

Related Skills

slm-switch-profile

84 stars · from qualixar/superlocalmemory

Switch between memory profiles for context isolation and management. Use when the user wants to change profile context, separate work/personal memories, or manage multiple independent memory spaces. Each profile has its own database, graph, and patterns.

slm-status

84 stars · from qualixar/superlocalmemory

Check SuperLocalMemory system status, health, and statistics. Use when the user wants to know memory count, graph stats, patterns learned, database health, or system diagnostics. Shows comprehensive system health dashboard.

slm-show-patterns

84 stars · from qualixar/superlocalmemory

Show what SuperLocalMemory has learned about your preferences, workflow patterns, and project context. Use when the user asks "what have you learned about me?" or wants to see their coding identity patterns. Shows tech preferences, workflow sequences, and engagement health.

slm-remember

84 stars · from qualixar/superlocalmemory

Save content to SuperLocalMemory with intelligent indexing and knowledge graph integration. Use when the user wants to remember information, save context, store coding decisions, or persist knowledge for future sessions. Automatically indexes, graphs, and learns patterns.

slm-list-recent

84 stars · from qualixar/superlocalmemory

List most recent memories in chronological order. Use when the user wants to see what was recently saved, review recent conversations, check what they worked on today, or browse memory history. Shows memories sorted by creation time (newest first).

slm-build-graph

84 stars · from qualixar/superlocalmemory

Build or rebuild the knowledge graph from existing memories using TF-IDF entity extraction and Leiden clustering. Use when search results seem poor, after bulk imports, or to optimize performance. Automatically discovers relationships between memories and creates topic clusters.

superlocalmemory

84 stars · from qualixar/superlocalmemory

AI agent memory with mathematical foundations. Store, recall, search, and manage memories locally with zero cloud dependency.

recallmax

31392 stars · from sickn33/antigravity-awesome-skills

FREE — God-tier long-context memory for AI agents. Injects 500K-1M clean tokens, auto-summarizes with tone/intent preservation, compresses 14-turn history into 800 tokens.

feishu-memory-recall

3891 stars · from openclaw/skills

Cross-group memory, search, and event sharing for OpenClaw Feishu agents

memory-totalrecall

3891 stars · from openclaw/skills

Total Recall memory backend — git-branch-based persistent memory store with time-decay relevance.

librag-knowledge-recall-zh

3891 stars · from openclaw/skills

Uses the local LibRAG `/api/v1/librag/knowbase/recall` endpoint to recall knowledge-base data. Suited to Chinese-language scenarios: knowledge-base retrieval, document recall, evidence-passage extraction, source location, knowledge-base-backed question answering, and tasks users phrase as "knowledge-base query", "data recall", or "find the answer in the documents".

memory-recall

3891 stars · from openclaw/skills

## What it does