verify-task
MUST use after completing any multi-step task or project. Verifies completion against the original plan, checks quality criteria, and documents outcomes.
Best use case
verify-task is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using verify-task should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/verify-task/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
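The manual steps above can be sketched as a small script. Only the directory layout comes from the instructions; the download URL and the stand-in file content are placeholders, not the real values.

```python
# Sketch of the manual install. The agent auto-discovers SKILL.md at this
# path on restart. The download step is shown as a comment because the
# repository URL is a placeholder, not the real link.
from pathlib import Path

skill_path = Path(".claude/skills/verify-task/SKILL.md")
skill_path.parent.mkdir(parents=True, exist_ok=True)
# In practice, fetch the real file, e.g.:
#   urllib.request.urlretrieve("<raw GitHub URL to SKILL.md>", skill_path)
skill_path.write_text("# Verify Task\n")  # stand-in content for this sketch
print(skill_path.exists())
```

Once the file is in place, restarting the agent is enough; no registration step is described beyond the path convention.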
How verify-task Compares
| Feature / Agent | verify-task | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
verify-task runs after any multi-step task or project is completed. It verifies completion against the original plan, checks quality criteria, and documents outcomes.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Verify Task

## Overview

Confirm successful completion and document outcomes against the original plan.

## When to Verify

- All tasks from plan are marked complete
- User asks "is it done?" or "did it work?"
- Before declaring a project finished
- After each checkpoint in long-running tasks

## Verification Process

### Step 1: Load Original Plan

Read the plan that was created by the write-plan skill.

### Step 2: Verify Each Checkpoint

Go through each checkpoint and confirm:

- [ ] All tasks marked complete
- [ ] Verification criteria met
- [ ] Quality standards achieved

### Step 3: Final Quality Checks

**General quality criteria:**

- [ ] Output matches original goal
- [ ] No obvious errors or issues
- [ ] Documentation updated (if applicable)
- [ ] User can use/access the result

### Step 4: User Confirmation

```
"Verification complete. Final checks:
✓ All tasks from plan completed (X/Y)
✓ Quality criteria met
✓ [Specific checks]

[Preview/demonstrate result]

Does this meet your expectations? Any adjustments needed?"
```

### Step 5: Document Completion

Save the completion report to: `memory/plans/YYYY-MM-DD-<project>-complete.md`

Template:

```markdown
# [Project] - Completion Report

**Date Completed:** YYYY-MM-DD
**Original Goal:** [from plan]
**Final Result:** [brief description]

## Completion Summary

| Metric | Planned | Actual |
|--------|---------|--------|
| Checkpoints | X | X |
| Tasks | Y | Y |
| Time | Z min | W min |

## Verification Checklist

- [x] All tasks complete
- [x] Quality criteria met
- [x] User approved

## What Was Delivered

[Description of final output]

## Blockers Encountered

1. [Blocker] → [Resolution]

## Lessons Learned

- [What worked well]
- [What to do differently next time]
```

## Handling Issues

### If verification fails:

**Minor issues:** Quick fixes, proceed
**Major issues:** Return to doing-tasks or re-plan

## Principles

- **Objectivity** - Verify against the plan, not assumptions
- **Thoroughness** - Check all criteria
- **Honesty** - Report issues, don't hide problems
- **User-centric** - Final approval comes from user satisfaction
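Step 5 of the skill saves the completion report under a fixed path convention. A minimal sketch of building that path follows; the function name and the lowercase-hyphen slug rule are illustrative assumptions, and only the `memory/plans/YYYY-MM-DD-<project>-complete.md` pattern comes from the skill text.

```python
# Sketch: build the completion-report path from the Step 5 convention,
# memory/plans/YYYY-MM-DD-<project>-complete.md.
from datetime import date
from pathlib import Path

def completion_report_path(project: str, base: str = "memory/plans") -> Path:
    # Slugify the project name; the lowercase-hyphen rule is an assumption.
    slug = project.lower().replace(" ", "-")
    return Path(base) / f"{date.today():%Y-%m-%d}-{slug}-complete.md"

print(completion_report_path("Site Redesign"))
```

A deterministic path like this lets repeated runs of the skill find (or overwrite) the report for a given project and day without extra bookkeeping.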
Related Skills
task-decomposer
Decomposes complex user requests into executable subtasks, identifies required capabilities, searches for existing skills at skills.sh, and creates new skills when no solution exists. This skill should be used when the user submits a complex multi-step request, wants to automate workflows, or needs help breaking down large tasks into manageable pieces.
tasknotes
Manage tasks in Obsidian via TaskNotes plugin API. Use when user wants to create tasks, list tasks, query by status or project, update task status, delete tasks, or check what they need to do.
task-watchdog
Task lock and timeout monitoring system. Task state lives in external files so it does not pollute the agent context; liveness is judged purely by heartbeat + GRACE, with no immediate alerts sent.
verify-before-done
Prevent premature completion claims, repeated same-pattern retries, and weak handoffs. Use this skill to improve verification, strategy switching, and blocked-task reporting without changing personality or tone.
verify-claims
Verify claims and information using professional fact-checking services. Use this skill when users want to verify facts, check claims in articles/videos/transcripts, validate news authenticity, cross-reference information with trusted fact-checkers, or investigate potentially false or misleading content. Triggers include requests to "fact check", "verify this", "is this true", "check if this is accurate", or when users share content they want validated against misinformation.
clawhub-krump-verify
Enables AI agents (e.g. OpenClaw) to understand and use Krump Verify for on-chain move verification against Story IP. Use when the user or agent needs to verify a dance move, pay via USDC.k or x402/EVVM receipt, call KrumpVerify contracts, integrate with ClawHub (clawhub.ai), or build similar EVVM/x402 apps on Story Aeneid.
doing-tasks
Use when executing any task. Work through plans systematically, tracking progress, handling blockers, and coordinating with other skills. The central execution skill.
verify-submission
Review applications and verify task submissions on OpenAnt. Use when the agent (as task creator) needs to review applicants, accept or reject applications, approve or reject submitted work, or give feedback on deliverables. Covers "review applications", "approve submission", "reject work", "check applicants", "verify task".
team-task-dispatch
Coordinate team task execution on OpenAnt. Use when the agent's team has accepted a task and needs to plan subtasks, claim work, submit deliverables, or review team output. Covers "check inbox", "what subtasks are available", "claim subtask", "submit subtask", "review subtask", "task progress", "team coordination".
search-tasks
Search and browse tasks on OpenAnt. Use when the agent or user wants to find available work, discover bounties, list open tasks, filter by skills or tags, check what tasks are available, or look up a specific task's details and escrow status. Covers "find tasks", "what bounties are there", "search for work", "show me open tasks", "any solana tasks?".
my-tasks
View your personal task history and status on OpenAnt. Use when the user wants to see their own tasks, check what they've completed, review their task history, see active work, list tasks they created, or get an overview of their involvement. Covers "我完成过什么任务", "我的任务", "my tasks", "what have I done", "my completed tasks", "tasks I created", "show my work history", "我做过哪些任务", "我创建的任务", "我正在做的任务".
monitor-tasks
Monitor task activity, check notifications, and view platform stats on OpenAnt. Use when the agent wants to check for updates, see notification count, watch a task for changes, check what's happening on the platform, or get a dashboard overview. Covers "check notifications", "any updates?", "platform stats", "what's new", "status update", "watch task". For personal task history and listing, use the my-tasks skill instead.