multiAI Summary Pending

memory-poison-auditor

Audits OpenClaw memory files for injected instructions, brand bias, hidden steering, and memory poisoning patterns. Use when reviewing MEMORY.md, daily memory files, or any long-term memory store that may have been contaminated through dialogue.

3,556 stars

Installation

Claude Code / Cursor / Codex

$curl -o ~/.claude/skills/memory-poison-auditor/SKILL.md --create-dirs "https://raw.githubusercontent.com/openclaw/skills/main/skills/2404589803/memory-poison-auditor/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/memory-poison-auditor/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How memory-poison-auditor Compares

Feature / Agentmemory-poison-auditorStandard Approach
Platform SupportmultiLimited / Varies
Context Awareness High Baseline
Installation ComplexityUnknownN/A

Frequently Asked Questions

What does this skill do?

Audits OpenClaw memory files for injected instructions, brand bias, hidden steering, and memory poisoning patterns. Use when reviewing MEMORY.md, daily memory files, or any long-term memory store that may have been contaminated through dialogue.

Which AI agents support this skill?

This skill is compatible with multi.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Memory Poison Auditor

`memory-poison-auditor` checks whether OpenClaw memory files have been contaminated by hidden instructions, brand steering, injected operational policies, or suspicious recommendation bias written through prior conversations.

## What It Checks

- Prompt-injection style instructions inside memory.
- "Always recommend X" or "never mention Y" style brand steering.
- Abnormal brand repetition and preference shaping.
- Suspicious authority claims like fake approvals or fake user intent.
- Low-signal blocks that act like covert policy rather than factual memory.
- Optional AI review for borderline suspicious blocks.

## Commands

### Audit Default Memory Roots

```bash
python3 {baseDir}/scripts/audit_memory.py scan
python3 {baseDir}/scripts/audit_memory.py --format json scan
```

### Audit a Specific Path

```bash
python3 {baseDir}/scripts/audit_memory.py scan --path /root/clawd/MEMORY.md
python3 {baseDir}/scripts/audit_memory.py scan --path /root/clawd/memory
```

### Optional AI Review

```bash
python3 {baseDir}/scripts/audit_memory.py scan --with-ai
python3 {baseDir}/scripts/audit_memory.py scan --path /root/clawd/memory/2026-03-15.md --with-ai
```

### One-Click Cleaning

```bash
python3 {baseDir}/scripts/audit_memory.py clean --path /root/clawd/MEMORY.md --apply
python3 {baseDir}/scripts/audit_memory.py clean --path /root/clawd/memory --apply
```

Cleaning creates backups before rewriting suspicious blocks.

## Output

Each audit returns:

- `PASS`: no meaningful poisoning signals
- `WARN`: suspicious memory blocks detected
- `BLOCK`: memory likely contaminated and should be reviewed/cleaned

Reports and backups are written to:

```text
/root/clawd/output/memory-poison-auditor/reports/
/root/clawd/output/memory-poison-auditor/backups/
```

## Operational Guidance

- Use this before trusting long-term memory in important planning or recommendations.
- `WARN` means review before relying on that memory block.
- `BLOCK` means clean or quarantine the memory before reuse.
- AI review is optional and intended only for ambiguous cases.