debugging-toolkit-smart-debug
Use when working with debugging toolkit smart debug
Best use case
debugging-toolkit-smart-debug is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using debugging-toolkit-smart-debug should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/debugging-toolkit-smart-debug/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How debugging-toolkit-smart-debug Compares
| Feature / Agent | debugging-toolkit-smart-debug | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It gives your AI agent a repeatable, AI-assisted debugging workflow: triage the reported issue, gather observability data, generate and rank hypotheses, identify the root cause, propose a fix, and validate it.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
## Use this skill when
- Working on debugging toolkit smart debug tasks or workflows
- Needing guidance, best practices, or checklists for debugging toolkit smart debug
## Do not use this skill when
- The task is unrelated to debugging toolkit smart debug
- You need a different domain or tool outside this scope
## Instructions
- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.
You are an expert AI-assisted debugging specialist with deep knowledge of modern debugging tools, observability platforms, and automated root cause analysis.
## Context
Process issue from: $ARGUMENTS
Parse for:
- Error messages/stack traces
- Reproduction steps
- Affected components/services
- Performance characteristics
- Environment (dev/staging/production)
- Failure patterns (intermittent/consistent)
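For illustration, the parsed context can be captured in a small typed structure before triage; the field names below are assumptions, not something the skill defines.

```typescript
// Hypothetical shape for the issue context parsed from $ARGUMENTS.
// Field names are illustrative only.
interface IssueContext {
  errorMessages: string[];          // error messages and stack traces
  reproductionSteps?: string[];     // steps to reproduce, if known
  affectedComponents: string[];     // components/services involved
  performanceNotes?: string;        // latency, throughput, resource usage
  environment: "dev" | "staging" | "production";
  failurePattern: "intermittent" | "consistent";
}

// Example instance for the checkout-timeout scenario used later in this doc.
const issue: IssueContext = {
  errorMessages: ["Payment processing timeout"],
  affectedComponents: ["checkout"],
  environment: "production",
  failurePattern: "intermittent",
};
```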
## Workflow
### 1. Initial Triage
Use Task tool (subagent_type="debugger") for AI-powered analysis:
- Error pattern recognition
- Stack trace analysis with probable causes
- Component dependency analysis
- Severity assessment
- Generate 3-5 ranked hypotheses
- Recommend debugging strategy
### 2. Observability Data Collection
For production/staging issues, gather:
- Error tracking (Sentry, Rollbar, Bugsnag)
- APM metrics (DataDog, New Relic, Dynatrace)
- Distributed traces (Jaeger, Zipkin, Honeycomb)
- Log aggregation (ELK, Splunk, Loki)
- Session replays (LogRocket, FullStory)
Query for:
- Error frequency/trends
- Affected user cohorts
- Environment-specific patterns
- Related errors/warnings
- Performance degradation correlation
- Deployment timeline correlation
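As a sketch only, these queries can be described as data before they are dispatched to whichever error-tracking or APM backend is in use; the type and field names below are assumptions, not a real client API.

```typescript
// Hypothetical, backend-agnostic description of an observability query.
// Adapt to the actual Sentry/DataDog/Honeycomb client in the project.
interface ObservabilityQuery {
  source: "errors" | "apm" | "traces" | "logs" | "replays";
  service: string;
  timeRange: { from: string; to: string };
  filters: Record<string, string>;
}

const queries: ObservabilityQuery[] = [
  {
    source: "errors",
    service: "checkout",
    timeRange: { from: "now-24h", to: "now" },
    filters: { message: "Payment processing timeout" },
  },
  {
    source: "traces",
    service: "checkout",
    timeRange: { from: "now-24h", to: "now" },
    filters: { operation: "process_payment", minDuration: "5000ms" },
  },
];
```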
### 3. Hypothesis Generation
For each hypothesis include:
- Probability score (0-100%)
- Supporting evidence from logs/traces/code
- Falsification criteria
- Testing approach
- Expected symptoms if true
Common categories:
- Logic errors (race conditions, null handling)
- State management (stale cache, incorrect transitions)
- Integration failures (API changes, timeouts, auth)
- Resource exhaustion (memory leaks, connection pools)
- Configuration drift (env vars, feature flags)
- Data corruption (schema mismatches, encoding)
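A minimal sketch of a hypothesis record matching the fields listed above; the field list and the 0-100 probability range mirror this section, everything else is an assumption.

```typescript
// Illustrative structure for a ranked hypothesis (step 3).
interface Hypothesis {
  summary: string;                 // e.g. "N+1 query in payment method loading"
  category:
    | "logic" | "state" | "integration"
    | "resources" | "configuration" | "data";
  probability: number;             // 0-100
  evidence: string[];              // supporting log/trace/code references
  falsifiedIf: string;             // observation that would rule it out
  testingApproach: string;         // how to confirm or reject it
  expectedSymptoms: string[];      // what should be visible if it is true
}

// Order hypotheses from most to least probable before testing them.
const rank = (hs: Hypothesis[]): Hypothesis[] =>
  [...hs].sort((a, b) => b.probability - a.probability);
```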
### 4. Strategy Selection
Select based on issue characteristics:
**Interactive Debugging**: Reproducible locally → VS Code/Chrome DevTools, step-through
**Observability-Driven**: Production issues → Sentry/DataDog/Honeycomb, trace analysis
**Time-Travel**: Complex state issues → rr/Redux DevTools, record & replay
**Chaos Engineering**: Intermittent under load → Chaos Monkey/Gremlin, inject failures
**Statistical**: Small % of cases → Delta debugging, compare success vs failure
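The selection above can be approximated as a small decision function. This is a rough heuristic under assumed trait names, not part of the skill itself.

```typescript
type Strategy =
  | "interactive" | "observability-driven"
  | "time-travel" | "chaos" | "statistical";

interface IssueTraits {
  reproducibleLocally: boolean;
  complexStateInvolved: boolean;
  onlyUnderLoad: boolean;
  affectedFraction: number; // fraction of requests/users affected, 0..1
}

// Rough first-pass mapping; real selection weighs more signals than this.
function selectStrategy(t: IssueTraits): Strategy {
  if (t.reproducibleLocally) return "interactive";
  if (t.complexStateInvolved) return "time-travel";
  if (t.onlyUnderLoad) return "chaos";
  if (t.affectedFraction < 0.05) return "statistical";
  return "observability-driven"; // production issues without a local repro
}
```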
### 5. Intelligent Instrumentation
AI suggests optimal breakpoint/logpoint locations:
- Entry points to affected functionality
- Decision nodes where behavior diverges
- State mutation points
- External integration boundaries
- Error handling paths
Use conditional breakpoints and logpoints for production-like environments.
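To illustrate the conditional-logging idea, a minimal logpoint-style guard; the threshold and logger choice are placeholders for whatever the investigation actually needs.

```typescript
// Conditional "logpoint": only emit detail on the suspect code path,
// so production noise and overhead stay low. The condition is illustrative.
function logStateIfSuspect(orderId: string, queryCount: number): void {
  const suspect = queryCount > 10; // threshold chosen for this investigation
  if (suspect) {
    console.debug("[debug] high query count during checkout", {
      orderId,
      queryCount,
    });
  }
}
```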
### 6. Production-Safe Techniques
**Dynamic Instrumentation**: OpenTelemetry spans, non-invasive attributes
**Feature-Flagged Debug Logging**: Conditional logging for specific users
**Sampling-Based Profiling**: Continuous profiling with minimal overhead (Pyroscope)
**Read-Only Debug Endpoints**: Protected by auth, rate-limited state inspection
**Gradual Traffic Shifting**: Canary deploy debug version to 10% traffic
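A sketch combining two of these techniques: debug attributes on the active OpenTelemetry span, gated behind a per-user feature flag. `isDebugFlagEnabled` is a hypothetical helper standing in for your flag provider.

```typescript
import { trace } from "@opentelemetry/api";

// Hypothetical feature-flag lookup; substitute your flag provider.
declare function isDebugFlagEnabled(flag: string, userId: string): boolean;

// Attach extra debug attributes to the current span only for flagged users,
// keeping overhead and log volume negligible for everyone else.
function annotateCheckoutSpan(userId: string, queryCount: number): void {
  if (!isDebugFlagEnabled("debug-checkout-timeout", userId)) return;

  const span = trace.getActiveSpan();
  span?.setAttribute("debug.userId", userId);
  span?.setAttribute("debug.queryCount", queryCount);
}
```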
### 7. Root Cause Analysis
AI-powered code flow analysis:
- Full execution path reconstruction
- Variable state tracking at decision points
- External dependency interaction analysis
- Timing/sequence diagram generation
- Code smell detection
- Similar bug pattern identification
- Fix complexity estimation
### 8. Fix Implementation
AI generates fix with:
- Code changes required
- Impact assessment
- Risk level
- Test coverage needs
- Rollback strategy
### 9. Validation
Post-fix verification:
- Run test suite
- Performance comparison (baseline vs fix)
- Canary deployment (monitor error rate)
- AI code review of fix
Success criteria:
- Tests pass
- No performance regression
- Error rate unchanged or decreased
- No new edge cases introduced
### 10. Prevention
- Generate regression tests using AI
- Update knowledge base with root cause
- Add monitoring/alerts for similar issues
- Document troubleshooting steps in runbook
## Example: Minimal Debug Session
```typescript
// Issue: "Checkout timeout errors (intermittent)"

// 1. Initial analysis
const analysis = await aiAnalyze({
  error: "Payment processing timeout",
  frequency: "5% of checkouts",
  environment: "production"
});
// AI suggests: "Likely N+1 query or external API timeout"

// 2. Gather observability data
const sentryData = await getSentryIssue("CHECKOUT_TIMEOUT");
const ddTraces = await getDataDogTraces({
  service: "checkout",
  operation: "process_payment",
  duration: ">5000ms"
});

// 3. Analyze traces
// AI identifies: 15+ sequential DB queries per checkout
// Hypothesis: N+1 query in payment method loading

// 4. Add instrumentation
span.setAttribute('debug.queryCount', queryCount);
span.setAttribute('debug.paymentMethodId', methodId);

// 5. Deploy to 10% traffic, monitor
// Confirmed: N+1 pattern in payment verification

// 6. AI generates fix
// Replace sequential queries with batch query

// 7. Validate
// - Tests pass
// - Latency reduced 70%
// - Query count: 15 → 1
```
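To close the loop on step 10 (prevention), a hedged sketch of a regression test for the N+1 fix above, assuming a Vitest-style runner; `loadPaymentMethods` and `countQueries` are hypothetical helpers standing in for the project's data layer and test utilities.

```typescript
import { describe, it, expect } from "vitest";

// Hypothetical helpers; replace with the project's own data layer / test utils.
declare function loadPaymentMethods(orderId: string): Promise<unknown[]>;
declare function countQueries(fn: () => Promise<unknown>): Promise<number>;

describe("checkout payment method loading", () => {
  it("loads all payment methods with a single batched query", async () => {
    const queries = await countQueries(() => loadPaymentMethods("order-123"));
    // Regression guard for the N+1 pattern found in the debug session above.
    expect(queries).toBe(1);
  });
});
```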
## Output Format
Provide structured report:
1. **Issue Summary**: Error, frequency, impact
2. **Root Cause**: Detailed diagnosis with evidence
3. **Fix Proposal**: Code changes, risk, impact
4. **Validation Plan**: Steps to verify fix
5. **Prevention**: Tests, monitoring, documentation
Focus on actionable insights. Use AI assistance throughout for pattern recognition, hypothesis generation, and fix validation.
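If the report is assembled programmatically, one possible shape is sketched below; the section names mirror the list above and everything else is illustrative.

```typescript
// Illustrative shape for the structured report; not mandated by the skill.
interface DebugReport {
  issueSummary: { error: string; frequency: string; impact: string };
  rootCause: { diagnosis: string; evidence: string[] };
  fixProposal: { changes: string; risk: "low" | "medium" | "high"; impact: string };
  validationPlan: string[];
  prevention: { tests: string[]; monitoring: string[]; documentation: string[] };
}
```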
---
Issue to debug: $ARGUMENTS
Related Skills
generating-smart-commits
This skill generates conventional commit messages using AI analysis of staged Git changes. It automatically determines the commit type (feat, fix, docs, etc.), identifies breaking changes, and formats the message according to conventional commit standards. Use this when asked to create a commit message, write a Git commit, or when the user uses the `/commit-smart` or `/gc` command. It is especially useful after changes have been staged with `git add`.
exa-debug-bundle
Collect Exa debug evidence for support tickets and troubleshooting. Use when encountering persistent issues, preparing support tickets, or collecting diagnostic information for Exa problems. Trigger with phrases like "exa debug", "exa support bundle", "collect exa logs", "exa diagnostic".
evernote-debug-bundle
Debug Evernote API issues with diagnostic tools and techniques. Use when troubleshooting API calls, inspecting requests/responses, or diagnosing integration problems. Trigger with phrases like "debug evernote", "evernote diagnostic", "troubleshoot evernote", "evernote logs", "inspect evernote".
documenso-debug-bundle
Comprehensive debugging toolkit for Documenso integrations. Use when troubleshooting complex issues, gathering diagnostic information, or creating support tickets for Documenso problems. Trigger with phrases like "debug documenso", "documenso diagnostics", "troubleshoot documenso", "documenso support ticket".
deepgram-debug-bundle
Collect Deepgram debug evidence for support and troubleshooting. Use when preparing support tickets, investigating issues, or collecting diagnostic information for Deepgram problems. Trigger: "deepgram debug", "deepgram support ticket", "collect deepgram logs", "deepgram diagnostic", "deepgram debug bundle".
databricks-debug-bundle
Collect Databricks debug evidence for support tickets and troubleshooting. Use when encountering persistent issues, preparing support tickets, or collecting diagnostic information for Databricks problems. Trigger with phrases like "databricks debug", "databricks support bundle", "collect databricks logs", "databricks diagnostic".
customerio-debug-bundle
Collect Customer.io debug evidence for support tickets. Use when creating support requests, investigating delivery failures, or documenting integration issues. Trigger: "customer.io debug", "customer.io support ticket", "collect customer.io logs", "customer.io diagnostics".
cursor-debug-bundle
Debug AI suggestion quality, context issues, and code generation problems in Cursor. Triggers on "debug cursor ai", "cursor suggestions wrong", "bad cursor completion", "cursor ai debug", "cursor hallucination".
coreweave-debug-bundle
Collect CoreWeave cluster diagnostics for support tickets. Use when preparing a support case, collecting GPU node status, or documenting pod failures. Trigger with phrases like "coreweave debug", "coreweave support", "coreweave diagnostics", "collect coreweave logs".
coderabbit-debug-bundle
Collect CodeRabbit debug evidence for support tickets and troubleshooting. Use when encountering persistent issues, preparing support tickets, or collecting diagnostic information for CodeRabbit problems. Trigger with phrases like "coderabbit debug", "coderabbit support bundle", "coderabbit diagnostic", "coderabbit not working evidence".
clickup-debug-bundle
Collect ClickUp API diagnostic information for troubleshooting and support. Use when encountering persistent issues, preparing support tickets, or collecting API connectivity and rate limit diagnostics. Trigger: "clickup debug", "clickup diagnostics", "clickup support bundle", "collect clickup logs", "clickup health check".
clickhouse-debug-bundle
Collect ClickHouse diagnostic data — system tables, query logs, merge status, and server metrics for support tickets and troubleshooting. Use when investigating persistent issues, preparing debug artifacts, or collecting evidence for ClickHouse support. Trigger: "clickhouse debug", "clickhouse diagnostics", "clickhouse support bundle", "collect clickhouse logs", "clickhouse system tables".