codebase-analysis
Systematically analyze codebase structure, complexity, dependencies, and architectural patterns to understand project organization
Best use case
codebase-analysis is best used when you need a repeatable AI agent workflow rather than a one-off prompt: it systematically analyzes codebase structure, complexity, dependencies, and architectural patterns to build an understanding of project organization.
Teams using codebase-analysis should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in .claude/skills/codebase-analysis/SKILL.md inside your project
- Restart your AI agent; it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
It systematically analyzes codebase structure, complexity, dependencies, and architectural patterns so you can understand how a project is organized.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Codebase Analysis Skill
## Objective
Perform comprehensive, systematic analysis of project codebases to understand:
- Project structure and organization
- Technology stack and dependencies
- Architectural patterns and conventions
- Code complexity and quality metrics
- Key components and their relationships
## When to Use This Skill
Auto-invoke when:
- Starting work on a new project
- User asks to "analyze", "review", "audit", or "understand" the codebase
- Before making architectural decisions
- Planning refactoring or major changes
- Onboarding new developers
## Analysis Methodology
### Phase 1: Discovery (Project Structure)
**Goal**: Map the high-level project organization
**Tools**: Glob, LS, Read
**Process**:
1. **Identify project type** by reading `package.json`, `tsconfig.json`, or framework-specific configs
2. **Map directory structure** using LS at root level:
```
Key directories to identify:
- Source code: src/, app/, pages/, components/
- Tests: __tests__/, tests/, *.test.*, *.spec.*
- Config: config/, .config/
- Documentation: docs/, README.md
- Build output: dist/, build/, .next/
```
3. **Scan for important files**:
- Build configs: `vite.config.*, webpack.config.*, next.config.*`
- TypeScript: `tsconfig.json`, `tsconfig.*.json`
- Package management: `package.json`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
- Environment: `.env*`, `.env.example`
- Git: `.gitignore`, `.git/`
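The discovery pass can be sketched as a small helper. This is an illustrative Python sketch, not part of the skill itself: the marker-file mapping and directory list below are assumptions chosen for the example, not an exhaustive catalog.

```python
from pathlib import Path

# Illustrative marker files -> project type; not exhaustive.
PROJECT_MARKERS = {
    "next.config.js": "Next.js",
    "angular.json": "Angular",
    "package.json": "Node.js",
    "pyproject.toml": "Python",
}

# Key directories from the list above.
KEY_DIRS = ["src", "app", "pages", "components", "tests", "docs", "config"]

def discover(root: str) -> dict:
    """Return a coarse map of project type and notable top-level directories."""
    root_path = Path(root)
    project_type = "unknown"
    for marker, kind in PROJECT_MARKERS.items():
        if (root_path / marker).is_file():
            project_type = kind
            break
    dirs = [d for d in KEY_DIRS if (root_path / d).is_dir()]
    return {"type": project_type, "dirs": dirs}
```

In practice the agent performs the same steps with its Glob, LS, and Read tools; the sketch just makes the decision order explicit (check the most specific marker first).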
### Phase 2: Technology Stack Analysis
**Goal**: Identify frameworks, libraries, and versions
**Tools**: Read, Grep
**Process**:
1. **Read package.json**:
- Extract `dependencies` (runtime libraries)
- Extract `devDependencies` (development tools)
- Note `scripts` (available commands)
- Check `engines` (Node.js version requirements)
2. **Identify framework**:
- Next.js: Check for `next` in dependencies, `next.config.*`, `app/` or `pages/` directory
- React: Check for `react` and `react-dom`
- Vue: Check for `vue`, `*.vue` files
- Svelte: Check for `svelte`, `*.svelte` files
- Angular: Check for `@angular/core`, `angular.json`
3. **Identify key libraries**:
- State management: Redux, Zustand, MobX, Pinia
- Routing: react-router, vue-router, next/navigation
- UI libraries: MUI, Ant Design, shadcn/ui, Chakra UI
- Styling: Tailwind CSS, styled-components, emotion, CSS modules
- Testing: Vitest, Jest, Playwright, Cypress
- Build tools: Vite, Webpack, esbuild, Turbopack
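The framework-detection order matters: Next.js projects also depend on React, so more specific frameworks must be checked first. A minimal sketch, assuming the hypothetical helper `detect_stack` operates on raw package.json text:

```python
import json

# Illustrative dependency -> framework mapping; check more specific
# frameworks (Next.js) before the libraries they build on (React).
FRAMEWORK_MARKERS = [
    ("next", "Next.js"),
    ("@angular/core", "Angular"),
    ("vue", "Vue"),
    ("svelte", "Svelte"),
    ("react", "React"),
]

def detect_stack(package_json_text: str) -> dict:
    """Extract framework, scripts, and Node.js requirement from package.json."""
    pkg = json.loads(package_json_text)
    deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
    framework = next(
        (name for dep, name in FRAMEWORK_MARKERS if dep in deps), "unknown"
    )
    return {
        "framework": framework,
        "scripts": sorted(pkg.get("scripts", {})),
        "node": pkg.get("engines", {}).get("node"),
    }
```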
### Phase 3: Architecture Pattern Analysis
**Goal**: Understand code organization and patterns
**Tools**: Grep, Glob, Read
**Process**:
1. **Component patterns** (for React/Vue/Svelte):
```
Use Glob to find: **/*.{jsx,tsx,vue,svelte}
Analyze:
- Component naming conventions
- File structure (co-located styles, tests)
- Component size (lines of code)
```
2. **API/Backend patterns**:
```
Use Grep to search for:
- API routes: "export.*(GET|POST|PUT|DELETE)"
- Database queries: "prisma\.|mongoose\.|sql"
- Authentication: "auth|jwt|session"
```
3. **State management patterns**:
```
Use Grep to find:
- Context API: "createContext|useContext"
- Redux: "createSlice|useSelector"
- Zustand: "use\w*Store\s*=\s*create"
```
4. **File organization patterns**:
- Monorepo: Check for `packages/`, `apps/`, `turbo.json`, `nx.json`
- Feature-based: Check for directories like `features/`, `modules/`
- Layer-based: Check for `components/`, `services/`, `utils/`, `hooks/`
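The Grep searches above amount to a small set of heuristic regexes applied per file. A sketch of that idea, with patterns adapted from this phase (the pattern names and exact regexes are illustrative):

```python
import re

# Heuristic regexes mirroring the Grep searches above.
PATTERNS = {
    "context_api": re.compile(r"createContext|useContext"),
    "redux": re.compile(r"createSlice|useSelector"),
    "api_route": re.compile(r"export.*(GET|POST|PUT|DELETE)"),
}

def scan_source(text: str) -> set:
    """Return the set of pattern names found in a source string."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}
```

Heuristics like these produce false positives (e.g. the word DELETE in a comment), so treat hits as leads to confirm with Read, not as conclusions.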
### Phase 4: Code Quality & Complexity Assessment
**Goal**: Identify potential issues and technical debt
**Tools**: Grep, Bash, Read
**Process**:
1. **Linting & Formatting**:
- Check for: `.eslintrc*`, `.prettierrc*`, `biome.json`
- Run linter if available: `npm run lint` (via Bash)
2. **Testing coverage**:
- Find test files: Use Glob for `**/*.{test,spec}.{js,ts,jsx,tsx}`
- Calculate coverage: Run `npm run test:coverage` if available
3. **TypeScript strictness**:
- Read `tsconfig.json`
- Check `strict: true`, `strictNullChecks`, etc.
- Look for `@ts-ignore` or `any` usage (Grep)
4. **Code complexity indicators**:
```
Use Grep and Bash to flag potential issues:
- Large files: Find files > 500 lines
- Deep nesting: Search for excessive indentation
- TODO/FIXME comments: Grep for "TODO|FIXME|HACK"
- Console logs: Grep for "console\.(log|debug|warn)"
```
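The complexity indicators above reduce to line counts plus a few regex tallies. A minimal sketch over in-memory file contents; the 500-line threshold comes from the list above, and the helper name is hypothetical:

```python
import re

def complexity_flags(path_to_text: dict, max_lines: int = 500) -> dict:
    """Flag large files and count TODO/FIXME/HACK and console.* calls.

    path_to_text maps file paths to their contents; the threshold is
    illustrative and should be tuned per project.
    """
    large = [p for p, t in path_to_text.items() if t.count("\n") + 1 > max_lines]
    all_text = "\n".join(path_to_text.values())
    return {
        "large_files": large,
        "todos": len(re.findall(r"TODO|FIXME|HACK", all_text)),
        "console_logs": len(re.findall(r"console\.(?:log|debug|warn)", all_text)),
    }
```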
### Phase 5: Dependency & Security Analysis
**Goal**: Identify outdated or vulnerable dependencies
**Tools**: Bash, Read
**Process**:
1. **Check for lock files**:
- Presence of `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
2. **Run security audit** (if npm/pnpm available):
```bash
npm audit --json
# or
pnpm audit --json
```
3. **Check for outdated dependencies**:
```bash
npm outdated
```
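The JSON audit output can be summarized before reporting. This sketch assumes the `metadata.vulnerabilities` severity summary emitted by recent npm versions; the exact schema varies across npm releases, so it falls back to zeros for missing fields:

```python
import json

def summarize_audit(audit_json: str) -> dict:
    """Summarize `npm audit --json` output by severity.

    Assumes the metadata.vulnerabilities summary present in recent npm
    versions; missing severities default to 0.
    """
    report = json.loads(audit_json)
    counts = report.get("metadata", {}).get("vulnerabilities", {})
    severities = ("critical", "high", "moderate", "low")
    return {sev: counts.get(sev, 0) for sev in severities}
```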
## Output Format
Provide a structured analysis report:
```markdown
# Codebase Analysis Report
## Project Overview
- **Name**: [project name from package.json]
- **Type**: [framework/library]
- **Version**: [version]
- **Node.js**: [required version]
## Technology Stack
### Core Framework
- [Framework name & version]
### Key Dependencies
- UI: [library]
- State: [library]
- Routing: [library]
- Styling: [library]
- Testing: [library]
### Build Tools
- [Vite/Webpack/etc]
## Architecture
### Directory Structure
```
[tree-like representation of key directories]
```
### Patterns Identified
- [Component patterns]
- [State management approach]
- [API structure]
- [File organization]
## Code Quality Metrics
- **TypeScript**: [strict/loose/none]
- **Linting**: [ESLint/Biome/none]
- **Testing**: [X test files found, coverage: Y%]
- **Code Issues**: [TODOs: X, Console logs: Y]
## Recommendations
1. [Priority recommendation]
2. [Next priority]
3. ...
## Risk Areas
- [Potential issues or technical debt]
## Next Steps
- [Suggested actions based on analysis]
```
## Best Practices
1. **Progressive Detail**: Start with high-level overview, dive deeper only when needed
2. **Context Window Management**: For large codebases, analyze in chunks (by directory/feature)
3. **Tool Selection**:
- Use Glob for file discovery (faster than find)
- Use Grep for pattern search (faster than reading all files)
- Use Read only for critical files (package.json, configs)
4. **Time Efficiency**: Complete analysis in < 60 seconds for typical projects
5. **Actionable Insights**: Always provide specific, actionable recommendations
## Integration with Other Skills
This skill works well with:
- `quality-gates` - Use analysis results to run appropriate quality checks
- `project-initialization` - Compare against templates to identify missing setup
- `refactoring-safe` - Identify refactoring opportunities
- Framework-specific skills (`nextjs-optimization`, `react-patterns`) - Auto-invoke based on detected framework
## Error Handling
If analysis cannot complete:
1. **Missing dependencies**: Suggest running `npm install`
2. **Corrupted files**: Report specific files and continue with partial analysis
3. **Large codebase**: Switch to targeted analysis mode (specific directories only)
4. **Permission issues**: Request necessary file access permissions
## Version History
- **1.0.0** (2025-01-03): Initial skill creation with progressive disclosure support