# Skill: Debugging & Testing Standards
> **Skill ID**: `SKILL_DEBUGGING`
## Best use case
Debugging & Testing Standards is best used when you need a repeatable AI agent workflow rather than a one-off prompt.
Teams using this skill should expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
## When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation
### Claude Code / Cursor / Codex
#### Manual Installation
- Download `SKILL.md` from GitHub
- Place it in `.claude/skills/debugging/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
## How Skill: Debugging & Testing Standards Compares
| Feature / Agent | Skill: Debugging & Testing Standards | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions
### What does this skill do?
It defines debugging and testing standards for AI agent workflows: structured logging with levels, specific exception handling with stack traces, stack-appropriate test frameworks, and an LLM-assisted testing protocol.
### Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# Skill: Debugging & Testing Standards
> **Skill ID**: `SKILL_DEBUGGING`
> **Tags**: `debug-rules`, `logging`, `testing`, `polyglot-qa`
> **Version**: 1.2
> **Related Knowledge**: `KNOWLEDGE_QA.md`
## 1. Logging Standards
**Rationale**: `print()` or `console.log()` statements are often unbuffered, unclassified, and hard to filter. Structured logging provides timestamps, levels, and output control.
### Rules
- **MUST**: Use the standard logging framework for the current stack (see `KNOWLEDGE_QA.md`).
- **MUST**: Use a standard formatter that includes `[TIMESTAMP][LEVEL][LOGGER_NAME]`.
- **FORBIDDEN**: Do NOT use raw print/console statements for production output.
### Levels (Standard Mapping)
- `DEBUG`: Detailed diagnostic info.
- `INFO`: Confirmation of working execution.
- `WARNING`: Unexpected events, but app still running.
- `ERROR`: Functionality or transaction failed.
- `CRITICAL/FATAL`: App crash imminent or data corruption.
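For a Python stack, the formatter rule and level mapping above might look like the following minimal sketch using the stdlib `logging` module (the logger name `payments` and the messages are illustrative):

```python
import logging

# Formatter matching the required [TIMESTAMP][LEVEL][LOGGER_NAME] shape
formatter = logging.Formatter("[%(asctime)s][%(levelname)s][%(name)s] %(message)s")

handler = logging.StreamHandler()
handler.setFormatter(formatter)

logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug("cart contents: %s", ["sku-1", "sku-2"])  # detailed diagnostics
logger.info("checkout started")                        # normal operation
logger.warning("retrying gateway call")                # unexpected, still running
logger.error("payment declined")                       # transaction failed
```

Unlike a raw `print()`, every line here carries a timestamp and a level, so production output can be filtered or silenced per logger without touching call sites.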
## 2. Exception & Error Handling
**Rationale**: Silent failures or broad catches hide bugs and make root-cause analysis impossible.
### Rules
- **MUST**: Catch specific exceptions/errors.
- **FORBIDDEN**: "Empty" catch blocks (e.g., `except: pass` or `catch(e){}`).
- **MUST**: Include the full error context and stack trace (if available) when logging Errors.
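In Python, the three rules above can be sketched together; the helper `load_order` is hypothetical, but `logger.exception` is the stdlib way to attach a full stack trace to an ERROR-level record:

```python
import logging

logger = logging.getLogger("orders")

def load_order(path):
    """Hypothetical helper: read an order file, logging failures with context."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:  # specific exception, never a bare `except:`
        # logger.exception logs at ERROR level and appends the stack trace
        logger.exception("order file missing: %s", path)
        raise  # re-raise so the failure is not silently swallowed
```

Re-raising after logging keeps the error visible to callers, avoiding the silent-failure pattern the rationale warns about.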
## 3. Testing Standards
**Rationale**: Tests ensure reliability and prevent regressions across different language stacks.
### Rules
- **Framework**: Use the stack's standard runner (e.g., `pytest`, `jest`, `testing`, `google-test`, `junit`, `XCTest`).
- **Location**: Store tests in a predictable structure (e.g., `tests/` directory or co-located `.test.` files).
- **Naming**: Follow the discovery convention of the chosen framework.
- **Styles**:
- **Unit Tests**: Focus on logic; mock external dependencies.
- **Integration Tests**: Verify real interactions between components/services.
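A minimal sketch of the unit-test style for a Python stack, using `unittest.mock` to stub out the external dependency (the `charge` function and `gateway` interface are illustrative, not from the source):

```python
from unittest.mock import Mock

def charge(gateway, amount):
    """Logic under test: validates input, delegates the network call to `gateway`."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.submit(amount)

def test_charge_delegates_to_gateway():
    gateway = Mock()                 # external dependency is mocked out
    gateway.submit.return_value = "ok"
    assert charge(gateway, 10) == "ok"
    gateway.submit.assert_called_once_with(10)
```

The test exercises only the local logic; verifying a real gateway interaction would belong in an integration test.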
## 4. Platform-Specific Debugging
- **Tkinter (Python)**: Trap `TclError` and redirect `stderr` to a log file.
- **Node.js**: Use `pino` for performance; redact sensitive keys.
- **Go**: Use `slog` for structured concurrency-safe logging.
- **Mobile**: Use `Timber` (Android) or `OSLog` (iOS) with proper privacy redaction.
## 5. LLM-Assisted Testing Workflow
**Rationale**: Leverage AI to maintain high test coverage with minimal friction.
### Feature Completion Protocol
**Trigger**: When the Agent finishes implementing a single feature/function.
1. **Ask**: "Should I write a unit test for this feature?"
2. **Action**: If User says **YES**:
- Build a test file using the appropriate stack framework.
- **Temp Logging**: Enable maximum verbosity (e.g., `DEBUG` level) outputting to a temporary file (`debug_test.log`) during the test run to capture internal state.
- Run the test and report results.
- Ask if the temp logging file should be kept.
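The temp-logging step above can be sketched in Python as a wrapper that attaches a DEBUG-level file handler for the duration of one test run (the helper name and its API are illustrative, not part of the protocol):

```python
import logging

def run_with_temp_debug_log(test_fn, log_path="debug_test.log"):
    """Run `test_fn` with a temporary DEBUG-level file handler attached to root."""
    root = logging.getLogger()
    handler = logging.FileHandler(log_path, mode="w")
    handler.setFormatter(logging.Formatter("[%(asctime)s][%(levelname)s][%(name)s] %(message)s"))
    handler.setLevel(logging.DEBUG)
    previous_level = root.level
    root.addHandler(handler)
    root.setLevel(logging.DEBUG)  # maximum verbosity for this run only
    try:
        return test_fn()
    finally:                      # always restore level and detach the handler
        root.setLevel(previous_level)
        root.removeHandler(handler)
        handler.close()
```

Because the handler is removed in `finally`, normal log verbosity is restored even if the test fails, and `debug_test.log` can then be kept or deleted per the user's answer.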
### Batch Implementation Protocol
**Trigger**: When the Agent finishes a series of related features (a "Batch").
1. **Ask**: "Should I perform an Integration Test?"
2. **Action**: If User says **YES**:
- Verify interaction between the newly created/modified components.
- Use the integration suite of the target stack.
- Report verification results.

## Related Skills
- **vuln-scan**: Multi-language dependency security scan - Use Safety CLI and OSV-Scanner to quickly detect dependency vulnerabilities in Python/JS/Java projects
- **SKILL_ONBOARDING.md**: Conduct a one-time "Handshake Interview" with the user to establish their Developer Persona.
- **usb-debug**: No description provided.
- **sql-lint**: SQL code style check - Use SQLFluff to check SQL statement style and syntax (supports PostgreSQL, MySQL, SQLite, etc.)
- **serial-debug**: No description provided.
- **security-check**: Check dependency security vulnerabilities
- **rust-lint**: Rust code quality check - Use Clippy and Rustfmt to ensure Rust code standards and performance optimization
- **run-tests**: Run project test suite
- **register-debug**: No description provided.
- **owasp-scan**: OWASP dependency vulnerability scan - Use OWASP Dependency-Check to detect known CVE vulnerabilities in project dependencies
- **memory-guardian**: Cross-platform memory monitoring and cleanup skill for AI development environments