desloppify
Codebase health scanner and technical debt tracker. Use when the user asks about code quality, technical debt, dead code, large files, god classes, duplicate functions, code smells, naming issues, import cycles, or coupling problems. Also use when asked for a health score, what to fix next, or to create a cleanup plan. Supports 29 languages.
About this skill
Desloppify is a codebase health scanner and technical debt tracker packaged as an AI agent skill. It lets agents autonomously identify, prioritize, and fix a wide range of code quality issues: dead code, large files, god classes, duplicate functions, code smells, naming inconsistencies, import cycles, and coupling problems, across 29 programming languages. The workflow is a continuous cycle of scan, plan, execute, and rescan, aimed at honestly maximizing the codebase's 'strict score'. Agents follow the tool's own directives rather than substituting their own general analysis, and are expected to tackle large refactors and single-line fixes with equal diligence. The three phases are: an initial scan and review of the codebase; a planning phase that triages and organizes remediation tasks from desloppify's output; and an execution phase that implements the improvements properly rather than minimally.
Best use case
The primary use case for Desloppify is to systematically improve and maintain the health of a software codebase. It's invaluable for development teams and individual developers seeking to reduce technical debt, enhance code readability, and prevent the accumulation of code smells. By automating the identification and remediation of quality issues, it allows developers to focus on feature development while ensuring their codebase remains robust and maintainable.
The expected outcome is a codebase with improved health scores, reduced technical debt, and a higher standard of code quality, achieved through systematic scanning, planning, and execution.
Practical example
Example input
Can you analyze my Python and JavaScript codebase for any technical debt or code quality issues, then provide a plan to fix them?
Example output
I've scanned your codebase. The current strict score is 65/100. I found 15 god classes, 30 instances of duplicate code, and several import cycles. I've initiated a plan to address the highest-priority issues, starting with consolidating helper functions and breaking down a large 'utils.py' file.
When to use this skill
- When assessing the overall health and quality of a new or existing codebase.
- When seeking to identify and prioritize technical debt, dead code, or common code smells.
- When planning and executing refactoring efforts to improve code structure and maintainability.
- When needing a systematic approach to ongoing code quality improvement across multiple languages.
When not to use this skill
- For generating new code or implementing new features from scratch.
- For tasks unrelated to code quality or technical debt, such as performance profiling or security auditing.
- When the primary goal is rapid prototyping without immediate concern for long-term maintainability.
- If the target codebase is not supported by any of the 29 languages.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/desloppify/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
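A minimal shell sketch of those steps (the raw URL below is an assumption about the repo layout and default branch; verify it on GitHub if the download fails):

```shell
# Create the skill directory, then fetch SKILL.md into it.
# NOTE: the raw URL is an assumption (repo + "main" branch); check the repo if it 404s.
mkdir -p .claude/skills/desloppify
curl -fsSL --max-time 30 \
  -o .claude/skills/desloppify/SKILL.md \
  https://raw.githubusercontent.com/peteromallet/desloppify/main/SKILL.md \
  || echo "download failed; fetch SKILL.md manually from the repo"
```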
How desloppify Compares
| Feature / Agent | desloppify | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Medium | N/A |
Frequently Asked Questions
What does this skill do?
Desloppify scans a codebase for quality issues (dead code, large files, god classes, duplicate functions, code smells, naming issues, import cycles, and coupling problems), tracks them as technical debt, produces a health score, and helps you plan and execute a cleanup. It supports 29 languages.
How difficult is it to install?
The installation complexity is rated as medium. You can find the installation instructions above.
Where can I find the source code?
You can find the source code on GitHub at https://github.com/peteromallet/desloppify.
SKILL.md Source
<!-- desloppify-begin -->
<!-- desloppify-skill-version: 5 -->
# Desloppify
## 1. Your Job
Maximise the **strict score** honestly. Your main cycle: **scan → plan → execute → rescan**. Follow the scan output's **INSTRUCTIONS FOR AGENTS** — don't substitute your own analysis.
**Don't be lazy.** Do large refactors and small detailed fixes with equal energy. If it takes touching 20 files, touch 20 files. If it's a one-line change, make it. No task is too big or too small — fix things properly, not minimally.
## 2. The Workflow
Three phases, repeated as a cycle.
### Phase 1: Scan and review — understand the codebase
```bash
desloppify scan --path . # analyse the codebase
desloppify status # check scores — are we at target?
```
The scan will tell you if subjective dimensions need review. Follow its instructions. To trigger a review manually:
```bash
desloppify review --prepare # then follow your runner's review workflow
```
### Phase 2: Plan — decide what to work on
After reviews, triage stages and plan creation appear in the execution queue surfaced by `next`. Complete them in order — `next` tells you what each stage expects in the `--report`:
```bash
desloppify next # shows the next execution workflow step
desloppify plan triage --stage observe --report "themes and root causes..."
desloppify plan triage --stage reflect --report "comparison against completed work..."
desloppify plan triage --stage organize --report "summary of priorities..."
desloppify plan triage --complete --strategy "execution plan..."
```
For automated triage: `desloppify plan triage --run-stages --runner codex` (Codex) or `--runner claude` (Claude). Options: `--only-stages`, `--dry-run`, `--stage-timeout-seconds`.
Then shape the queue. **The plan shapes everything `next` gives you** — `next` is the execution queue, not the full backlog. Don't skip this step.
```bash
desloppify plan # see the living plan details
desloppify plan queue # compact execution queue view
desloppify plan reorder <pat> top # reorder — what unblocks the most?
desloppify plan cluster create <name> # group related issues to batch-fix
desloppify plan focus <cluster> # scope next to one cluster
desloppify plan skip <pat> # defer — hide from next
```
### Phase 3: Execute — grind the queue to completion
Trust the plan and execute. Don't rescan mid-queue — finish the queue first.
**Branch first.** Create a dedicated branch — never commit health work directly to main:
```bash
git checkout -b desloppify/code-health # or desloppify/<focus-area>
desloppify config set commit_pr 42 # link a PR for auto-updated descriptions
```
**The loop:**
```bash
# 1. Get the next item from the execution queue
desloppify next
# 2. Fix the issue in code
# 3. Resolve it (next shows the exact command including required attestation)
# 4. When you have a logical batch, commit and record
git add <files> && git commit -m "desloppify: fix 3 deferred_import findings"
desloppify plan commit-log record # moves findings uncommitted → committed, updates PR
# 5. Push periodically
git push -u origin desloppify/code-health
# 6. Repeat until the queue is empty
```
Score may temporarily drop after fixes — cascade effects are normal, keep going.
If `next` suggests an auto-fixer, run `desloppify autofix <fixer> --dry-run` to preview, then apply.
**When the queue is clear, go back to Phase 1.** New issues will surface, cascades will have resolved, priorities will have shifted. This is the cycle.
## 3. Reference
### Key concepts
- **Tiers**: T1 auto-fix → T2 quick manual → T3 judgment call → T4 major refactor.
- **Auto-clusters**: related findings are auto-grouped in `next`. Drill in with `next --cluster <name>`.
- **Zones**: production/script (scored), test/config/generated/vendor (not scored). Fix with `zone set`.
- **Wontfix cost**: widens the lenient↔strict gap. Challenge past decisions when the gap grows.
### Scoring
Overall score = **25% mechanical** + **75% subjective**.
- **Mechanical (25%)**: auto-detected issues — duplication, dead code, smells, unused imports, security. Fixed by changing code and rescanning.
- **Subjective (75%)**: design quality review — naming, error handling, abstractions, clarity. Starts at **0%** until reviewed. The scan will prompt you when a review is needed.
- **Strict score** is the north star: wontfix items count as open. The gap between overall and strict is your wontfix debt.
- **Score types**: overall (lenient), strict (wontfix counts), objective (mechanical only), verified (confirmed fixes only).
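The 25/75 blend can be illustrated with a toy calculation (the component scores below are hypothetical, not produced by the tool):

```shell
# Hypothetical component scores: 80 mechanical, 60 subjective.
mechanical=80
subjective=60
# overall = 25% mechanical + 75% subjective
overall=$(( (mechanical * 25 + subjective * 75) / 100 ))
echo "overall: ${overall}"   # overall: 65
```

Note why the subjective score matters so much: with subjective stuck at its initial 0%, even a perfect mechanical score caps overall at 25.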
### Reviews
Four paths to get subjective scores:
- **Local runner (Codex)**: `desloppify review --run-batches --runner codex --parallel --scan-after-import` — automated end-to-end.
- **Local runner (Claude)**: `desloppify review --prepare` → launch parallel subagents → `desloppify review --import merged.json` — see skill doc overlay for details.
- **Cloud/external**: `desloppify review --external-start --external-runner claude` → follow session template → `--external-submit`.
- **Manual path**: `desloppify review --prepare` → review per dimension → `desloppify review --import file.json`.
- Import first, fix after — import creates tracked state entries for correlation.
- Target-matching scores trigger auto-reset to prevent gaming. Use the blind-review workflow described in your agent overlay doc (e.g. `docs/CLAUDE.md`, `docs/HERMES.md`).
- Even moderate scores (60-80) dramatically improve overall health.
- Stale dimensions auto-surface in `next` — just follow the queue.
**Integrity rules:** Score from evidence only — no prior chat context, score history, or target-threshold anchoring. When evidence is mixed, score lower and explain uncertainty. Assess every requested dimension; never drop one.
#### Review output format
Return machine-readable JSON for review imports. For `--external-submit`, include `session` from the generated template:
```json
{
"session": {
"id": "<session_id_from_template>",
"token": "<session_token_from_template>"
},
"assessments": {
"<dimension_from_query>": 0
},
"findings": [
{
"dimension": "<dimension_from_query>",
"identifier": "short_id",
"summary": "one-line defect summary",
"related_files": ["relative/path/to/file.py"],
"evidence": ["specific code observation"],
"suggestion": "concrete fix recommendation",
"confidence": "high|medium|low"
}
]
}
```
`findings` MUST match `query.system_prompt` exactly (including `related_files`, `evidence`, and `suggestion`). Use `"findings": []` when no defects found. Import is fail-closed: invalid findings abort unless `--allow-partial` is passed. Assessment scores are auto-applied from trusted internal or cloud session imports. Legacy `--attested-external` remains supported.
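Since import is fail-closed, a quick local sanity check before `--import` can catch missing keys early. A sketch (the file name is an example; the required keys are taken from the format above):

```shell
# Example findings file with an empty findings list (valid per the format above).
cat > findings.json <<'EOF'
{"assessments": {"naming": 70}, "findings": []}
EOF

# Fail if any finding is missing a required key; an empty findings list passes.
python3 - <<'PY'
import json, sys
REQUIRED = {"dimension", "identifier", "summary",
            "related_files", "evidence", "suggestion", "confidence"}
data = json.load(open("findings.json"))
bad = [f for f in data.get("findings", []) if not REQUIRED <= set(f)]
sys.exit(1 if bad else 0)
PY
echo "findings.json passes the key check"
```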
#### Import paths
- Robust session flow (recommended): `desloppify review --external-start --external-runner claude` → use generated prompt/template → run printed `--external-submit` command.
- Durable scored import (legacy): `desloppify review --import findings.json --attested-external --attest "I validated this review was completed without awareness of overall score and is unbiased."`
- Findings-only fallback: `desloppify review --import findings.json`
#### Reviewer agent prompt
Runners that support agent definitions (Cursor, Copilot, Gemini) can create a dedicated reviewer agent. Use this system prompt:
```
You are a code quality reviewer. You will be given a codebase path, a set of
dimensions to score, and what each dimension means. Read the code, score each
dimension 0-100 from evidence only, and return JSON in the required format.
Do not anchor to target thresholds. When evidence is mixed, score lower and
explain uncertainty.
```
See your editor's overlay section below for the agent config format.
### Plan commands
```bash
desloppify plan reorder <cluster> top # move all cluster members at once
desloppify plan reorder <a> <b> top # mix clusters + findings in one reorder
desloppify plan reorder <pat> before -t X # position relative to another item/cluster
desloppify plan cluster reorder a,b top # reorder multiple clusters as one block
desloppify plan resolve <pat> # mark complete
desloppify plan reopen <pat> # reopen
desloppify backlog # broader non-execution backlog
```
### Commit tracking
```bash
desloppify plan commit-log # see uncommitted + committed status
desloppify plan commit-log record # record HEAD commit, update PR description
desloppify plan commit-log record --note "why" # with rationale
desloppify plan commit-log record --only "smells::*" # record specific findings only
desloppify plan commit-log history # show commit records
desloppify plan commit-log pr # preview PR body markdown
desloppify config set commit_tracking_enabled false # disable guidance
```
After resolving findings as `fixed`, the tool shows uncommitted work, committed history, and a suggested commit message. After committing externally, run `record` to move findings from uncommitted to committed and auto-update the linked PR description.
### Quick reference
```bash
desloppify next --count 5 # top 5 execution items
desloppify next --cluster <name> # drill into a cluster
desloppify backlog --count 5 # top 5 backlog items outside execution
desloppify show <pattern> # filter by file/detector/ID
desloppify show --status open # all open findings
desloppify plan skip --permanent "<id>" --note "reason" --attest "..." # accept debt
desloppify exclude <path> # exclude a directory from scanning
desloppify config show # show all config including excludes
desloppify scan --path . --reset-subjective # reset subjective baseline to 0
```
## 4. Fix Tool Issues Upstream
When desloppify itself appears wrong or inconsistent — a bug, a bad detection, a crash, confusing output — **fix it and open a PR**. If you can't confidently fix it, file an issue instead.
### Fix and PR (preferred)
Clone the tool repo to a temp directory, make the fix there, and verify it works against the project you're scanning before pushing.
```bash
git clone https://github.com/peteromallet/desloppify.git /tmp/desloppify-fix
cd /tmp/desloppify-fix
git checkout -b fix/<short-description>
```
Make your changes, then run the test suite and verify the fix against the original project:
```bash
python -m pytest desloppify/tests/ -q
python -m desloppify scan --path <project-root> # the project you were scanning
```
Once it looks good, push and open a PR:
```bash
git add <files> && git commit -m "fix: <what and why>"
git push -u origin fix/<short-description>
gh pr create --title "fix: <short description>" --body "$(cat <<'EOF'
## Problem
<what went wrong — include the command and output>
## Fix
<what you changed and why>
EOF
)"
```
Clean up after: `rm -rf /tmp/desloppify-fix`
### File an issue (fallback)
If the fix is unclear or the change needs discussion, open an issue at `https://github.com/peteromallet/desloppify/issues` with a minimal repro: command, path, expected output, actual output.
## Prerequisite
```bash
command -v desloppify >/dev/null 2>&1 \
  && echo "desloppify: installed" \
  || echo "NOT INSTALLED — run: pip install --upgrade git+https://github.com/peteromallet/desloppify.git"
```
<!-- desloppify-end -->
## Codex Overlay
This is the canonical Codex overlay used by the README install command.
1. Prefer first-class batch runs: `desloppify review --run-batches --runner codex --parallel --scan-after-import`.
2. The command writes immutable packet snapshots under `.desloppify/review_packets/holistic_packet_*.json`; use those for reproducible retries.
3. Keep reviewer input scoped to the immutable packet and the source files named in each batch.
4. If a batch fails, retry only that slice with `desloppify review --run-batches --packet <packet.json> --only-batches <idxs>`.
5. Manual override is safety-scoped: you cannot combine it with `--allow-partial`, and provisional manual scores expire on the next `scan` unless replaced by trusted internal or attested-external imports.
### Triage workflow
Prefer automated triage: `desloppify plan triage --run-stages --runner codex`
Options: `--only-stages observe,reflect` (subset), `--dry-run` (prompts only), `--stage-timeout-seconds N` (per-stage).
Run artifacts go to `.desloppify/triage_runs/<timestamp>/` — each run gets its own directory with `run.log` (live timestamped events), `run_summary.json`, per-stage `prompts/`, `output/`, and `logs/`. Check `run.log` to diagnose stalls or failures. Re-running resumes from the last confirmed stage.
If automated triage stalls, check `run.log` for the last event, then use `desloppify plan triage --stage-prompt <stage>` to get the full prompt with gate rules.
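Locating and tailing the newest run log can be sketched as follows (the run directory below is fabricated purely so the snippet is self-contained; real runs create their own timestamped directories):

```shell
# Simulate one run directory for illustration only.
mkdir -p .desloppify/triage_runs/20240101T000000
echo "stage observe: started" > .desloppify/triage_runs/20240101T000000/run.log

# Newest run first (ls -t sorts by modification time), then tail its log.
latest=$(ls -td .desloppify/triage_runs/*/ | head -n 1)
tail -n 20 "${latest}run.log"
```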
<!-- desloppify-overlay: codex -->
<!-- desloppify-end -->