debug-assist
Structured debugging workflow that prevents circular debugging. Follows a systematic approach of Reproduce, Isolate, Hypothesize, Verify, Fix, Test. Logs the debugging path as an artifact to avoid revisiting dead ends.
Best use case
debug-assist is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using debug-assist should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/debug-assist/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
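The manual steps above can be sketched as a short shell snippet. The `SKILL_SRC` variable is an assumption about where you saved the downloaded file; adjust it to your setup.

```shell
# Manual install sketch: run from your project root.
# SKILL_SRC is wherever you downloaded SKILL.md (an assumed location).
SKILL_SRC="${SKILL_SRC:-SKILL.md}"
mkdir -p .claude/skills/debug-assist
if [ -f "$SKILL_SRC" ]; then
  cp "$SKILL_SRC" .claude/skills/debug-assist/SKILL.md
else
  echo "Download SKILL.md from the GitHub repo first, then re-run." >&2
fi
```

After restarting the agent, the skill should appear in its discovered-skills list.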
Frequently Asked Questions
What does this skill do?
It enforces a systematic debugging loop (Reproduce, Isolate, Hypothesize, Verify, Fix, Test) and logs every hypothesis and result as an artifact, so you avoid circular debugging and never revisit dead ends.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Skill: Debug Assist

## What This Skill Does

Provides a **systematic debugging workflow** that prevents the common problem of circular debugging (try something → doesn't work → try something else → forget what was tried → repeat). Logs all hypotheses and results as a debugging trace.

## When to Use

- When a bug is non-trivial (not an obvious typo)
- When initial fix attempts haven't worked
- When the user says "I can't figure out why X happens"

## Execution Model

- **Always**: the primary agent runs this skill directly.
- **Output**: chat-based debugging trace + the fix itself.

## Workflow

### Step 1: Define the Bug

Clarify with the `question` tool:

1. **What's the expected behavior?**
2. **What's the actual behavior?**
3. **When did it start?** (after a specific change? always?)
4. **Is it reproducible?** (always, sometimes, only in certain conditions?)

### Step 2: Reproduce

Reproduce the bug locally:

```bash
# Run the failing scenario
<command that triggers the bug>
```

If it doesn't reproduce → it's environment-specific. Check: OS, versions, configuration, data.

**Log:** "Reproduced: Yes/No, with command: X"

### Step 3: Isolate

Narrow down the failure:

1. **Which file** is the error in? (from stack trace or error message)
2. **Which function** fails? (add logging if needed)
3. **Which input** triggers it? (test with minimal input)

**Log:** "Isolated to: <file>:<function>, triggered by: <input>"

### Step 4: Hypothesize

List possible causes (max 3):

1. Hypothesis A: <what might be wrong>
2. Hypothesis B: <alternative cause>
3. Hypothesis C: <less likely but possible>

**Do NOT fix yet.** Just list hypotheses.

### Step 5: Verify

Test each hypothesis:

- Add targeted logging or assertions
- Run the failing scenario
- Check which hypothesis matches

**Log each result:** "Hypothesis A: confirmed/rejected because <evidence>"

### Step 6: Fix

Apply the minimal fix for the confirmed hypothesis.

- **One change**: fix only the confirmed root cause
- **No refactoring**: fix the bug, nothing else

### Step 7: Test

1. Run the original failing scenario → must pass
2. Run the full test suite → no regressions
3. Remove any debugging logging

### Step 8: Document

If the bug was non-obvious, document it:

- What was the root cause?
- Why was it non-obvious?
- How to prevent it in the future?

Consider creating a test case that specifically prevents regression.

## Rules

1. **Log the path**: every hypothesis and its result must be recorded. This prevents circular debugging.
2. **Reproduce first**: never fix a bug you can't reproduce. You won't know if the fix works.
3. **Isolate before fixing**: narrow down to the smallest possible scope before changing code.
4. **One hypothesis at a time**: test one hypothesis, get a result, then move to the next.
5. **Minimal fix**: fix only the bug. No improvements, no refactoring, no "while I'm here."
6. **No built-in explore agent**: do NOT use the built-in `explore` subagent type.
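The "log the path" rule above can be illustrated with a tiny trace logger. This is a sketch, not part of the skill itself; the `DebugTrace` class and its methods are illustrative assumptions about what a debugging-trace artifact might look like.

```python
from dataclasses import dataclass, field

@dataclass
class DebugTrace:
    """Records each hypothesis and its outcome so dead ends aren't revisited.

    Illustrative sketch only; the skill itself keeps this trace in chat."""
    entries: list = field(default_factory=list)

    def log(self, hypothesis: str, result: str, evidence: str) -> None:
        # Every hypothesis and its result is recorded (Rule 1).
        self.entries.append(
            {"hypothesis": hypothesis, "result": result, "evidence": evidence}
        )

    def already_tried(self, hypothesis: str) -> bool:
        # Guard against circular debugging: refuse to re-test a rejected idea.
        return any(
            e["hypothesis"] == hypothesis and e["result"] == "rejected"
            for e in self.entries
        )

    def render(self) -> str:
        # Emit the trace artifact as a markdown checklist.
        return "\n".join(
            f"- Hypothesis: {e['hypothesis']} -> {e['result']} ({e['evidence']})"
            for e in self.entries
        )

trace = DebugTrace()
trace.log("stale cache", "rejected", "bug persists after cache clear")
trace.log("off-by-one in pagination", "confirmed", "fails only on last page")
print(trace.already_tried("stale cache"))  # True
```

The `already_tried` check is the mechanical version of "log the path": before testing a new hypothesis, the agent consults the trace instead of its (fallible) memory.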
Related Skills
distributed-debugging-debug-trace
You are a debugging expert specializing in setting up comprehensive debugging environments, distributed tracing, and diagnostic tools. Configure debugging workflows, implement tracing solutions, an...
error-debugging-multi-agent-review
Use when performing multi-agent review of error debugging.