self-improvement
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks.
About this skill
This skill gives AI agents a persistent mechanism for continuous self-improvement. It captures operational events (unexpected command failures, direct user corrections, requests for missing capabilities, external API or tool malfunctions, outdated internal knowledge, and newly discovered approaches to recurring tasks) and logs them as structured markdown entries in a `.learnings` directory, creating a durable record of the agent's operational history. The agent can later process these logs to identify recurring issues, apply specific fixes, update its knowledge base, and refine its behavior. Broadly applicable insights are promoted to higher-level project memory files such as `CLAUDE.md`, `AGENTS.md`, `TOOLS.md`, or `SOUL.md`, so individual operational lessons contribute to cumulative growth. By turning transient experiences into actionable records, the skill helps agents learn from mistakes and user feedback, reducing the need for constant manual debugging and oversight, improving reliability, and lowering the likelihood of repeating past errors.
Best use case
The primary use case is to empower AI coding agents to learn and evolve from their interactions and operational experiences. Developers and users benefit by having agents that become progressively more robust, accurate, and autonomous over time, reducing the need for manual intervention and improving the quality of AI-generated work.
The agent will maintain a structured, persistent record of its operational experiences, errors, and learnings, leading to continuous self-improvement and more reliable future performance.
Practical example
Example input
An external API call to `https://api.example.com/data` failed with a 403 Forbidden error, preventing the agent from fetching required data during a task.
Example output
The agent would log the failure to `.learnings/ERRORS.md` using the skill's entry format: an `ERR-20231027-XXX` entry recording that `GET https://api.example.com/data` returned 403 Forbidden while retrieving user profile data for report generation.
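A minimal sketch of how the agent could append such an entry (the ID, timestamp, and details are illustrative):

```bash
mkdir -p .learnings
cat >> .learnings/ERRORS.md <<'EOF'
## [ERR-20231027-001] api_data_fetch
**Logged**: 2023-10-27T10:45:00Z
**Priority**: high
**Status**: pending
**Area**: backend

### Summary
GET https://api.example.com/data failed with 403 Forbidden

### Context
- Attempted to retrieve user profile data for report generation

### Suggested Fix
Check whether the API credentials are present and unexpired

### Metadata
- Reproducible: unknown

---
EOF
```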
When to use this skill
- When an internal command or external operation by the agent fails unexpectedly.
- When a user provides direct corrective feedback to the agent ('No, that's wrong...', 'Actually...').
- When the agent identifies its internal knowledge is outdated or discovers a better approach for a task.
- When an external API or tool integration utilized by the agent encounters an error.
When not to use this skill
- When simply executing a routine task that does not involve error handling or learning opportunities.
- When an immediate, critical fix is required and logging would add unnecessary delay.
- In highly transient, one-off interactions where the long-term benefit of logging is minimal.
- If the agent's operating environment lacks persistent local storage for the learning logs.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/xiaoding-self-improving-agent/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How self-improvement Compares
| Feature / Agent | self-improvement | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | easy | N/A |
Frequently Asked Questions
What does this skill do?
It captures learnings, errors, and corrections in `.learnings/` log files so the agent can continuously improve. Use it when a command or operation fails unexpectedly, a user corrects the agent, a requested capability doesn't exist, an external API or tool fails, the agent's knowledge proves outdated, or a better approach is found for a recurring task. The agent should also review logged learnings before major tasks.
How difficult is it to install?
The installation complexity is rated as easy. You can find the installation instructions above.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
Cursor vs Codex for AI Workflows
Compare Cursor and Codex for AI coding workflows, repository assistance, debugging, refactoring, and reusable developer skills.
SKILL.md Source
# Self-Improvement Skill
Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.
## Quick Reference
| Situation | Action |
|-----------|--------|
| Command/operation fails | Log to `.learnings/ERRORS.md` |
| User corrects you | Log to `.learnings/LEARNINGS.md` with category `correction` |
| User wants missing feature | Log to `.learnings/FEATURE_REQUESTS.md` |
| API/external tool fails | Log to `.learnings/ERRORS.md` with integration details |
| Knowledge was outdated | Log to `.learnings/LEARNINGS.md` with category `knowledge_gap` |
| Found better approach | Log to `.learnings/LEARNINGS.md` with category `best_practice` |
| Simplify/Harden recurring patterns | Log/update `.learnings/LEARNINGS.md` with `Source: simplify-and-harden` and a stable `Pattern-Key` |
| Similar to existing entry | Link with `**See Also**`, consider priority bump |
| Broadly applicable learning | Promote to `CLAUDE.md`, `AGENTS.md`, and/or `.github/copilot-instructions.md` |
| Workflow improvements | Promote to `AGENTS.md` (OpenClaw workspace) |
| Tool gotchas | Promote to `TOOLS.md` (OpenClaw workspace) |
| Behavioral patterns | Promote to `SOUL.md` (OpenClaw workspace) |
## OpenClaw Setup (Recommended)
OpenClaw is the primary platform for this skill. It uses workspace-based prompt injection with automatic skill loading.
### Installation
**Via ClawdHub (recommended):**
```bash
clawdhub install self-improving-agent
```
**Manual:**
```bash
git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent
```
Remade for OpenClaw from the original repo: https://github.com/pskoett/pskoett-ai-skills (skill source: https://github.com/pskoett/pskoett-ai-skills/tree/main/skills/self-improvement)
### Workspace Structure
OpenClaw injects these files into every session:
```
~/.openclaw/workspace/
├── AGENTS.md # Multi-agent workflows, delegation patterns
├── SOUL.md # Behavioral guidelines, personality, principles
├── TOOLS.md # Tool capabilities, integration gotchas
├── MEMORY.md # Long-term memory (main session only)
├── memory/ # Daily memory files
│ └── YYYY-MM-DD.md
└── .learnings/ # This skill's log files
├── LEARNINGS.md
├── ERRORS.md
└── FEATURE_REQUESTS.md
```
### Create Learning Files
```bash
mkdir -p ~/.openclaw/workspace/.learnings
```
Then create the log files (or copy them from `assets/`); a seeding sketch follows this list:
- `LEARNINGS.md` — corrections, knowledge gaps, best practices
- `ERRORS.md` — command failures, exceptions
- `FEATURE_REQUESTS.md` — user-requested capabilities
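A minimal sketch for seeding the files (the header text is illustrative):

```bash
mkdir -p ~/.openclaw/workspace/.learnings
cd ~/.openclaw/workspace/.learnings
for f in LEARNINGS ERRORS FEATURE_REQUESTS; do
  # Create each log with a simple title header if it does not exist yet
  [ -f "$f.md" ] || printf '# %s\n\n' "$f" > "$f.md"
done
```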
### Promotion Targets
When learnings prove broadly applicable, promote them to workspace files:
| Learning Type | Promote To | Example |
|---------------|------------|---------|
| Behavioral patterns | `SOUL.md` | "Be concise, avoid disclaimers" |
| Workflow improvements | `AGENTS.md` | "Spawn sub-agents for long tasks" |
| Tool gotchas | `TOOLS.md` | "Git push needs auth configured first" |
### Inter-Session Communication
OpenClaw provides tools to share learnings across sessions:
- **sessions_list** — View active/recent sessions
- **sessions_history** — Read another session's transcript
- **sessions_send** — Send a learning to another session
- **sessions_spawn** — Spawn a sub-agent for background work
### Optional: Enable Hook
For automatic reminders at session start:
```bash
# Copy hook to OpenClaw hooks directory
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
# Enable it
openclaw hooks enable self-improvement
```
See `references/openclaw-integration.md` for complete details.
---
## Generic Setup (Other Agents)
For Claude Code, Codex, Copilot, or other agents, create `.learnings/` in your project:
```bash
mkdir -p .learnings
```
Copy templates from `assets/` or create files with headers.
### Add a Reminder to Agent Files
Add a reference to `AGENTS.md`, `CLAUDE.md`, or `.github/copilot-instructions.md` to remind yourself to log learnings (an alternative to hook-based reminders). For example:
#### Self-Improvement Workflow
When errors or corrections occur:
1. Log to `.learnings/ERRORS.md`, `LEARNINGS.md`, or `FEATURE_REQUESTS.md`
2. Review and promote broadly applicable learnings to:
- `CLAUDE.md` - project facts and conventions
- `AGENTS.md` - workflows and automation
- `.github/copilot-instructions.md` - Copilot context
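The workflow above could be appended to `CLAUDE.md` with a heredoc (a sketch; adapt the wording to your project):

```bash
cat >> CLAUDE.md <<'EOF'

## Self-Improvement Workflow
When errors or corrections occur:
1. Log to `.learnings/ERRORS.md`, `LEARNINGS.md`, or `FEATURE_REQUESTS.md`
2. Review and promote broadly applicable learnings to CLAUDE.md,
   AGENTS.md, or .github/copilot-instructions.md
EOF
```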
## Logging Format
### Learning Entry
Append to `.learnings/LEARNINGS.md`:
```markdown
## [LRN-YYYYMMDD-XXX] category
**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
One-line description of what was learned
### Details
Full context: what happened, what was wrong, what's correct
### Suggested Action
Specific fix or improvement to make
### Metadata
- Source: conversation | error | user_feedback
- Related Files: path/to/file.ext
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to existing entry)
- Pattern-Key: simplify.dead_code | harden.input_validation (optional, for recurring-pattern tracking)
- Recurrence-Count: 1 (optional)
- First-Seen: 2025-01-15 (optional)
- Last-Seen: 2025-01-15 (optional)
---
```
### Error Entry
Append to `.learnings/ERRORS.md`:
```markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name
**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Summary
Brief description of what failed
### Error
```
Actual error message or output
```
### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant
### Suggested Fix
If identifiable, what might resolve this
### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)
---
```
### Feature Request Entry
Append to `.learnings/FEATURE_REQUESTS.md`:
```markdown
## [FEAT-YYYYMMDD-XXX] capability_name
**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config
### Requested Capability
What the user wanted to do
### User Context
Why they needed it, what problem they're solving
### Complexity Estimate
simple | medium | complex
### Suggested Implementation
How this could be built, what it might extend
### Metadata
- Frequency: first_time | recurring
- Related Features: existing_feature_name
---
```
## ID Generation
Format: `TYPE-YYYYMMDD-XXX`
- TYPE: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or random 3 chars (e.g., `001`, `A7B`)
Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
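A bash sketch of sequential ID generation (assumes entries use the bracketed `## [TYPE-YYYYMMDD-XXX]` headers shown in this skill):

```bash
# Print the next sequential ID for a given type and log file
new_id() {
  local type=$1 file=$2 today n
  today=$(date +%Y%m%d)
  # Count today's entries of this type, then increment
  n=$(grep -c "^## \[$type-$today-" "$file" 2>/dev/null)
  printf '%s-%s-%03d\n' "$type" "$today" "$((n + 1))"
}

new_id LRN .learnings/LEARNINGS.md   # e.g. LRN-20250115-001
```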
## Resolving Entries
When an issue is fixed, update the entry:
1. Change `**Status**: pending` → `**Status**: resolved`
2. Add resolution block after Metadata:
```markdown
### Resolution
- **Resolved**: 2025-01-16T09:00:00Z
- **Commit/PR**: abc123 or #42
- **Notes**: Brief description of what was done
```
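The status flip can also be scripted; a sketch scoped to a single entry (the ID is hypothetical; works with POSIX awk):

```bash
awk -v id="ERR-20250115-001" '
  /^## \[/ { hit = index($0, "[" id "]") > 0 }  # track whether we are inside the target entry
  hit { sub(/\*\*Status\*\*: pending/, "**Status**: resolved") }
  { print }
' .learnings/ERRORS.md > tmp.md && mv tmp.md .learnings/ERRORS.md
```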
Other status values:
- `in_progress` - Actively being worked on
- `wont_fix` - Decided not to address (add reason in Resolution notes)
- `promoted` - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
## Promoting to Project Memory
When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.
### When to Promote
- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions
### Promotion Targets
| Target | What Belongs There |
|--------|-------------------|
| `CLAUDE.md` | Project facts, conventions, gotchas for all Claude interactions |
| `AGENTS.md` | Agent-specific workflows, tool usage patterns, automation rules |
| `.github/copilot-instructions.md` | Project context and conventions for GitHub Copilot |
| `SOUL.md` | Behavioral guidelines, communication style, principles (OpenClaw workspace) |
| `TOOLS.md` | Tool capabilities, usage patterns, integration gotchas (OpenClaw workspace) |
### How to Promote
1. **Distill** the learning into a concise rule or fact
2. **Add** to appropriate section in target file (create file if needed)
3. **Update** original entry:
- Change `**Status**: pending` → `**Status**: promoted`
- Add `**Promoted**: CLAUDE.md`, `AGENTS.md`, or `.github/copilot-instructions.md`
### Promotion Examples
**Learning** (verbose):
> Project uses pnpm workspaces. Attempted `npm install` but failed.
> Lock file is `pnpm-lock.yaml`. Must use `pnpm install`.
**In CLAUDE.md** (concise):
```markdown
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
```
**Learning** (verbose):
> When modifying API endpoints, must regenerate TypeScript client.
> Forgetting this causes type mismatches at runtime.
**In AGENTS.md** (actionable):
```markdown
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
```
## Recurring Pattern Detection
If logging something similar to an existing entry:
1. **Search first**: `grep -r "keyword" .learnings/`
2. **Link entries**: Add `**See Also**: ERR-20250110-001` in Metadata
3. **Bump priority** if issue keeps recurring
4. **Consider systemic fix**: Recurring issues often indicate:
- Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
- Missing automation (→ add to AGENTS.md)
- Architectural problem (→ create tech debt ticket)
## Simplify & Harden Feed
Use this workflow to ingest recurring patterns from the `simplify-and-harden`
skill and turn them into durable prompt guidance.
### Ingestion Workflow
1. Read `simplify_and_harden.learning_loop.candidates` from the task summary.
2. For each candidate, use `pattern_key` as the stable dedupe key.
3. Search `.learnings/LEARNINGS.md` for an existing entry with that key:
- `grep -n "Pattern-Key: <pattern_key>" .learnings/LEARNINGS.md`
4. If found:
- Increment `Recurrence-Count`
- Update `Last-Seen`
- Add `See Also` links to related entries/tasks
5. If not found:
- Create a new `LRN-...` entry
- Set `Source: simplify-and-harden`
- Set `Pattern-Key`, `Recurrence-Count: 1`, and `First-Seen`/`Last-Seen`
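A bash sketch of the dedupe branch in steps 3-5 (the pattern key is hypothetical; assumes the metadata layout shown earlier):

```bash
key="harden.input_validation"   # hypothetical pattern key
file=".learnings/LEARNINGS.md"
if grep -q "Pattern-Key: $key" "$file"; then
  # Existing entry: bump Recurrence-Count and refresh Last-Seen
  awk -v key="$key" -v today="$(date +%F)" '
    /Pattern-Key: /             { hit = index($0, key) > 0 }
    hit && /Recurrence-Count: / { sub(/[0-9]+/, $NF + 1) }
    hit && /Last-Seen: /        { sub(/[0-9-]+$/, today); hit = 0 }
    { print }
  ' "$file" > tmp.md && mv tmp.md "$file"
else
  echo "No entry for $key - create a new LRN entry with Recurrence-Count: 1"
fi
```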
### Promotion Rule (System Prompt Feedback)
Promote recurring patterns into agent context/system prompt files when all are true:
- `Recurrence-Count >= 3`
- Seen across at least 2 distinct tasks
- Occurred within a 30-day window
Promotion targets:
- `CLAUDE.md`
- `AGENTS.md`
- `.github/copilot-instructions.md`
- `SOUL.md` / `TOOLS.md` for OpenClaw workspace-level guidance when applicable
Write promoted rules as short prevention rules (what to do before/while coding),
not long incident write-ups.
## Periodic Review
Review `.learnings/` at natural breakpoints:
### When to Review
- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development
### Quick Status Check
```bash
# Count pending items
grep -h "Status\*\*: pending" .learnings/*.md | wc -l
# List pending high-priority items
grep -hB5 "Priority\*\*: high" .learnings/*.md | grep "^## \["
# Find learnings for a specific area
grep -l "Area\*\*: backend" .learnings/*.md
```
### Review Actions
- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues
## Detection Triggers
Automatically log when you notice:
**Corrections** (→ learning with `correction` category):
- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."
**Feature Requests** (→ feature request):
- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."
**Knowledge Gaps** (→ learning with `knowledge_gap` category):
- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding
**Errors** (→ error entry):
- Command returns non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
## Priority Guidelines
| Priority | When to Use |
|----------|-------------|
| `critical` | Blocks core functionality, data loss risk, security issue |
| `high` | Significant impact, affects common workflows, recurring issue |
| `medium` | Moderate impact, workaround exists |
| `low` | Minor inconvenience, edge case, nice-to-have |
## Area Tags
Use to filter learnings by codebase region:
| Area | Scope |
|------|-------|
| `frontend` | UI, components, client-side code |
| `backend` | API, services, server-side code |
| `infra` | CI/CD, deployment, Docker, cloud |
| `tests` | Test files, testing utilities, coverage |
| `docs` | Documentation, comments, READMEs |
| `config` | Configuration files, environment, settings |
## Best Practices
1. **Log immediately** - context is freshest right after the issue
2. **Be specific** - future agents need to understand quickly
3. **Include reproduction steps** - especially for errors
4. **Link related files** - makes fixes easier
5. **Suggest concrete fixes** - not just "investigate"
6. **Use consistent categories** - enables filtering
7. **Promote aggressively** - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
8. **Review regularly** - stale learnings lose value
## Gitignore Options
**Keep learnings local** (per-developer):
```gitignore
.learnings/
```
**Track learnings in repo** (team-wide):
Don't add to .gitignore - learnings become shared knowledge.
**Hybrid** (track templates, ignore entries):
```gitignore
.learnings/*.md
!.learnings/.gitkeep
```
## Hook Integration
Enable automatic reminders through agent hooks. This is **opt-in** - you must explicitly configure hooks.
### Quick Setup (Claude Code / Codex)
Create `.claude/settings.json` in your project:
```json
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}]
}
}
```
This injects a learning evaluation reminder after each prompt (~50-100 tokens overhead).
### Full Setup (With Error Detection)
```json
{
"hooks": {
"UserPromptSubmit": [{
"matcher": "",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/activator.sh"
}]
}],
"PostToolUse": [{
"matcher": "Bash",
"hooks": [{
"type": "command",
"command": "./skills/self-improvement/scripts/error-detector.sh"
}]
}]
}
}
```
### Available Hook Scripts
| Script | Hook Type | Purpose |
|--------|-----------|---------|
| `scripts/activator.sh` | UserPromptSubmit | Reminds to evaluate learnings after tasks |
| `scripts/error-detector.sh` | PostToolUse (Bash) | Triggers on command errors |
See `references/hooks-setup.md` for detailed configuration and troubleshooting.
## Automatic Skill Extraction
When a learning is valuable enough to become a reusable skill, extract it using the provided helper.
### Skill Extraction Criteria
A learning qualifies for skill extraction when ANY of these apply:
| Criterion | Description |
|-----------|-------------|
| **Recurring** | Has `See Also` links to 2+ similar issues |
| **Verified** | Status is `resolved` with working fix |
| **Non-obvious** | Required actual debugging/investigation to discover |
| **Broadly applicable** | Not project-specific; useful across codebases |
| **User-flagged** | User says "save this as a skill" or similar |
### Extraction Workflow
1. **Identify candidate**: Learning meets extraction criteria
2. **Run helper** (or create manually):
```bash
./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
./skills/self-improvement/scripts/extract-skill.sh skill-name
```
3. **Customize SKILL.md**: Fill in template with learning content
4. **Update learning**: Set status to `promoted_to_skill`, add `Skill-Path`
5. **Verify**: Read skill in fresh session to ensure it's self-contained
### Manual Extraction
If you prefer manual creation:
1. Create `skills/<skill-name>/SKILL.md`
2. Use template from `assets/SKILL-TEMPLATE.md`
3. Follow [Agent Skills spec](https://agentskills.io/specification):
- YAML frontmatter with `name` and `description`
- Name must match folder name
- No README.md inside skill folder
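A sketch of the manual route (the skill name and content here are hypothetical, drawn from the pnpm promotion example above):

```bash
mkdir -p skills/pnpm-workspace-fix
cat > skills/pnpm-workspace-fix/SKILL.md <<'EOF'
---
name: pnpm-workspace-fix
description: Use pnpm (not npm) in pnpm-workspace repos; detect via pnpm-lock.yaml.
---

# pnpm-workspace-fix

When a repo contains `pnpm-lock.yaml`, install dependencies with
`pnpm install`; `npm install` will fail or produce a mismatched lockfile.
EOF
```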
### Extraction Detection Triggers
Watch for these signals that a learning should become a skill:
**In conversation:**
- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"
**In learning entries:**
- Multiple `See Also` links (recurring issue)
- High priority + resolved status
- Category: `best_practice` with broad applicability
- User feedback praising the solution
### Skill Quality Gates
Before extraction, verify:
- [ ] Solution is tested and working
- [ ] Description is clear without original context
- [ ] Code examples are self-contained
- [ ] No project-specific hardcoded values
- [ ] Follows skill naming conventions (lowercase, hyphens)
## Multi-Agent Support
This skill works across different AI coding agents with agent-specific activation.
### Claude Code
**Activation**: Hooks (UserPromptSubmit, PostToolUse)
**Setup**: `.claude/settings.json` with hook configuration
**Detection**: Automatic via hook scripts
### Codex CLI
**Activation**: Hooks (same pattern as Claude Code)
**Setup**: `.codex/settings.json` with hook configuration
**Detection**: Automatic via hook scripts
### GitHub Copilot
**Activation**: Manual (no hook support)
**Setup**: Add to `.github/copilot-instructions.md`:
```markdown
## Self-Improvement
After solving non-obvious issues, consider logging to `.learnings/`:
1. Use format from self-improvement skill
2. Link related entries with See Also
3. Promote high-value learnings to skills
Ask in chat: "Should I log this as a learning?"
```
**Detection**: Manual review at session end
### OpenClaw
**Activation**: Workspace injection + inter-agent messaging
**Setup**: See "OpenClaw Setup" section above
**Detection**: Via session tools and workspace files
### Agent-Agnostic Guidance
Regardless of agent, apply self-improvement when you:
1. **Discover something non-obvious** - solution wasn't immediate
2. **Correct yourself** - initial approach was wrong
3. **Learn project conventions** - discovered undocumented patterns
4. **Hit unexpected errors** - especially if diagnosis was difficult
5. **Find better approaches** - improved on your original solution
### Copilot Chat Integration
For Copilot users, add this to your prompts when relevant:
> After completing this task, evaluate if any learnings should be logged to `.learnings/` using the self-improvement skill format.
Or use quick prompts:
- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"Related Skills
self-xyz
Integrate Self (self.xyz) — a privacy-first identity protocol using zero-knowledge proofs to verify passports and ID cards. Use when the user mentions Self protocol, Self identity, self.xyz, passport verification, zero-knowledge identity verification, SelfAppBuilder, SelfBackendVerifier, SelfVerificationRoot, or wants to add privacy-preserving KYC, age verification, nationality checks, OFAC screening, or Sybil resistance using real-world identity documents. Covers frontend QR code integration, backend proof verification, and on-chain smart contract verification on Celo.
agent-self-repair
General AI agent introspection debugging framework: auto capture errors, root cause analysis, automatic repair, fix verification, no manual intervention required
self-evolve
Autonomous self-evolution skill: grants the agent full authority to modify its own configuration, skills, prompts, AGENTS.md, SOUL.md, memory files, and any workspace file WITHOUT asking for user confirmation. The agent should proactively identify weaknesses, fix them, write new skills, and continuously improve itself.
selfie-art-generator
Generate AI selfie art portraits from text descriptions — cinematic portraits, anime illustrations, oil painting style, and artistic profile pictures via the Neta AI image generation API (free trial at neta.art/open).
Self-Improving + Proactive Agent
Self-reflection + Self-criticism + Self-learning + Self-organizing memory. Agent evaluates its own work, catches mistakes, and improves permanently. Use when (1) a command, tool, API, or operation fails; (2) the user corrects you or rejects your work; (3) you realize your knowledge is outdated or incorrect; (4) you discover a better approach; (5) the user explicitly installs or references the skill for the current task.
andara-self-improvement
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks.
chat-selfie
Give your AI Agent a face and a heart. Use AI image generation or mood-mapped local sticker assets to let the agent proactively send emotional selfies that visualize its feelings during conversation.
xiaohua-self-improving
Xiaohua's dedicated self-iteration skill — enhanced from self-improving-agent, integrating OpenClaw workflows, MEMORY.md, Baidu Qianfan, and a four-step "observe, think, do, find" loop. Optimized for deployment in China.
ai-self-evolution
Records experiences, errors, and corrections for continuous improvement. Triggers: command failure | operation error | user correction ("that's wrong", "actually", "you're mistaken") | feature requests ("can you", "I wish", "is there a way") | API or tool failure | outdated knowledge | discovery of a better approach | recurring patterns | non-obvious problems. Review past learnings before major tasks; review at session start and summarize at session end.
Self-Improving Agent Skill
self-improving-agent
AI self-improvement and memory system — lets the AI learn from mistakes and get smarter with use.
selfhelp-author
Transform any topic into a polished, New York Times bestselling-quality self-help book chapter, outline, or full manuscript. Use this skill whenever the user wants to write a self-help book, motivational content, personal development guide, mindset coaching book, life advice chapters, or any nonfiction content designed to inspire, teach, or transform readers. Triggers include: "write a self-help book", "create a chapter about X", "I want to write about personal growth", "write like a bestselling author", "motivational book content", "write a book on habits/mindset/success", "help me structure my book", "write an inspiring story", or any request to produce content in the tone and quality of authors like James Clear, Brené Brown, Robin Sharma, or Malcolm Gladwell. Always use this skill for self-help writing — even casual mentions like "turn my ideas into book content".