analyze-impact

Pre-implementation impact analysis for a plan phase. Analyzes dependencies, test coverage, breaking change risk, and affected modules before coding begins. Produces an impact report that prevents surprises during implementation.

23 stars

Best use case

analyze-impact is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using analyze-impact should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl --create-dirs -o ~/.claude/skills/analyze-impact/SKILL.md "https://raw.githubusercontent.com/christophacham/agent-skills-library/main/skills/ai-ml/analyze-impact/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/analyze-impact/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How analyze-impact Compares

| Feature / Agent         | analyze-impact | Standard Approach |
|-------------------------|----------------|-------------------|
| Platform Support        | Not specified  | Limited / Varies  |
| Context Awareness       | High           | Baseline          |
| Installation Complexity | Unknown        | N/A               |

Frequently Asked Questions

What does this skill do?

Pre-implementation impact analysis for a plan phase. Analyzes dependencies, test coverage, breaking change risk, and affected modules before coding begins. Produces an impact report that prevents surprises during implementation.

Where can I find the source code?

The source lives in the christophacham/agent-skills-library repository on GitHub (the same repository referenced by the raw URL in the Installation section).

SKILL.md Source

# Skill: Analyze Impact

## What This Skill Does

Answers the question: **"What will break if I implement this phase?"** — before writing any code.

Performs a structured impact analysis by cross-referencing a phase's implementation plan against the current codebase. Identifies:

- **Affected modules** and their dependencies
- **Test coverage** of the areas to be changed
- **Breaking change risk** (public API modifications, data model changes)
- **Missing prerequisites** (dependencies not yet installed, config not yet set up)
- **Cross-module ripple effects** (changes that cascade through imports/references)

## When to Use

- Before starting implementation of a plan phase
- As an extension of `resume-plan` — run analyze-impact after the briefing, before coding
- When a phase feels risky and you want a structured pre-check
- When the implementation plan references modules you haven't read yet

Do NOT use this for general codebase exploration — use `generate-docs` for that.

## Execution Model

- **Always**: the primary agent runs this skill directly.
- **Rationale**: impact analysis feeds directly into implementation decisions. The primary agent needs this context to guide coding. Delegating would build context in the wrong place.
- **Output**: `plans/<name>/implementation/phase-N-impact.md` (persistent, referenced by the implementation plan).

## Workflow

### Step 1: Identify the Phase

Determine which plan and phase to analyze:

1. Check `plans/` for active plans
2. Read the active plan's `todo.md` to find the current phase
3. Read the phase document (`phases/phase-N.md`) for scope and deliverables
4. Read the implementation plan (`implementation/phase-N-impl.md`) for technical approach

If no active plan exists, use the `question` tool to ask what the user is planning to implement.
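
The steps above can be sketched as a small shell check. This is a hypothetical example: the `plans/<name>/` layout and the `- [ ]` checkbox convention in `todo.md` are assumptions, and the demo plan is fabricated for illustration.

```shell
# Hypothetical sketch of Step 1: find the current phase from todo.md.
# The plans/demo-plan directory and checkbox format are assumptions.
mkdir -p plans/demo-plan
printf -- '- [x] Phase 1: setup\n- [ ] Phase 2: auth\n' > plans/demo-plan/todo.md

# The first unchecked checkbox marks the phase to analyze
current=$(grep -m1 -e '\[ \]' plans/demo-plan/todo.md)
echo "current phase line: $current"
```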

### Step 2: Map Affected Files

From the implementation plan's "Affected Modules" and "Implementation Steps" sections, identify which files will be changed:

1. List the modules mentioned in the implementation plan
2. For each module, read the module documentation's Structure section (if docs exist) to get the file inventory
3. If no docs exist, use `find` or `ls` to map the module's file structure

Produce a list of files likely to be modified, created, or deleted.
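
When no module docs exist, the file inventory can be built with a single `find`. A minimal sketch, assuming a TypeScript project with a `src/auth` module and `*.test.ts` naming for tests (both placeholders):

```shell
# Hypothetical sketch of Step 2: inventory a module's source files.
# The src/auth path and .ts extension are placeholders for illustration.
mkdir -p src/auth
touch src/auth/login.ts src/auth/session.ts src/auth/session.test.ts

# Source files only; tests are assessed separately in Step 5
files=$(find src/auth -type f -name "*.ts" ! -name "*.test.ts" | sort)
echo "$files"
```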

### Step 3: Analyze Dependencies

For each affected file, check what depends on it:

```bash
# Find files that import/reference the affected module
grep -rl "import.*from.*<module>" --include="*.ts" .
grep -rl "from <module> import" --include="*.py" .
grep -rl "require.*<module>" --include="*.js" .
```

Build a dependency graph scoped to the affected files:

- **Upstream dependencies**: what the affected files import (will they still work?)
- **Downstream dependents**: what imports the affected files (will THEY still work?)
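
The downstream side of the graph can be demonstrated end to end. Everything here is fabricated for illustration: the `depdemo` directory, the file names, and the `auth` module are assumptions, not part of any real project.

```shell
# Hypothetical sketch of Step 3: count downstream dependents via grep.
# All paths and the "auth" module name are fabricated for illustration.
mkdir -p depdemo
printf 'import { login } from "./auth";\n'   > depdemo/app.ts
printf 'import { session } from "./auth";\n' > depdemo/admin.ts
printf 'export const token = 1;\n'           > depdemo/auth.ts

module="auth"
# Files whose import statements reference the module are dependents
dependents=$(grep -rl "import.*from.*${module}" depdemo --include="*.ts" | sort)
printf 'downstream dependents of %s:\n%s\n' "$module" "$dependents"
```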

### Step 4: Assess Breaking Change Risk

For each affected file, check if the planned changes touch public interfaces:

- **Exported functions/classes**: signature changes, renamed exports, removed exports
- **API endpoints**: changed routes, modified request/response schemas
- **Data models**: schema changes, migration requirements
- **Configuration**: changed config keys, new required config values

Rate the risk: Low (internal only) / Medium (public API, backward compatible) / High (breaking change)
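
One cheap way to approximate a file's public surface is to list its exported symbols before and after the change. A minimal sketch, assuming a TypeScript file with `export` declarations at line start; the file contents are fabricated:

```shell
# Hypothetical sketch of Step 4: list a file's exported symbols so the
# planned changes can be checked against the public surface.
mkdir -p apidemo
cat > apidemo/users.ts <<'EOF'
export function createUser(name: string): void {}
export const MAX_USERS = 100;
function internalHelper(): void {}
EOF

# Lines starting with "export" approximate the file's public API
grep -E '^export ' apidemo/users.ts
```

Diffing this listing against the post-change version flags removed or renamed exports, which map directly to the High-risk row of the assessment.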

### Step 5: Check Test Coverage

Assess whether the areas to be changed have existing test coverage:

```bash
# Find tests related to affected modules
find . \( -path "*/test*" -o -path "*/spec*" \) -name "*<module>*"
```

For each affected area:

- Are there existing tests? (Covered / Uncovered)
- Will the planned changes break existing tests? (Likely / Unlikely)
- Are new tests needed? (Yes / No, with rationale)
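
The Covered/Uncovered call can be automated when tests follow a sibling-file convention. A hypothetical sketch, assuming `*.test.ts` naming; the `covdemo` files are fabricated:

```shell
# Hypothetical sketch of Step 5: mark each affected file Covered or
# Uncovered based on whether a sibling *.test.ts file exists.
mkdir -p covdemo
touch covdemo/auth.ts covdemo/auth.test.ts covdemo/billing.ts  # billing lacks a test

out=$(for f in covdemo/auth.ts covdemo/billing.ts; do
  if [ -f "${f%.ts}.test.ts" ]; then
    echo "$f: Covered"
  else
    echo "$f: Uncovered"
  fi
done)
echo "$out"
```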

### Step 6: Check Prerequisites

Verify that the implementation plan's prerequisites are met:

- **Dependencies**: are required packages installed?
- **Configuration**: are required config values set?
- **Infrastructure**: are required services available?
- **Prior phases**: are previous phases actually completed?

Cross-reference against `todo.md` completed items.
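
The prerequisite checks above can be scripted per item. A minimal sketch: the `express` package, the `DATABASE_URL` variable, and the `prereqdemo` directory are placeholders for whatever the implementation plan actually requires.

```shell
# Hypothetical sketch of Step 6: prerequisite checks with Met/Missing
# statuses matching the report template. All names are placeholders.
mkdir -p prereqdemo
printf '{ "dependencies": { "express": "^4.18.0" } }\n' > prereqdemo/package.json
export DATABASE_URL="postgres://localhost/dev"

# Dependency check: is the required package declared?
dep_status=$(grep -q '"express"' prereqdemo/package.json && echo "Met" || echo "Missing")
# Configuration check: is the required env var set?
cfg_status=$([ -n "${DATABASE_URL:-}" ] && echo "Met" || echo "Missing")

echo "express: $dep_status"
echo "DATABASE_URL: $cfg_status"
```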

### Step 7: Generate Impact Report

Write the impact report to `plans/<name>/implementation/phase-N-impact.md`:

```markdown
# Phase N Impact Analysis

## Summary

| Metric | Value |
|--------|-------|
| Affected Modules | N |
| Files to Modify | N |
| Files to Create | N |
| Downstream Dependents | N |
| Breaking Change Risk | Low/Medium/High |
| Test Coverage | N% estimated |

## Affected Modules

| Module | Changes | Risk | Dependents |
|--------|---------|------|------------|
| <name> | <summary> | Low/Medium/High | N files |

## Dependency Analysis

### Upstream (what we import)
<list of dependencies that must remain stable>

### Downstream (what imports us)
<list of files/modules that will be affected by our changes>

## Breaking Change Assessment

| Change | Type | Risk | Mitigation |
|--------|------|------|------------|
| <change> | API/Schema/Config | Low/Medium/High | <how to mitigate> |

## Test Impact

| Area | Current Coverage | Action Needed |
|------|-----------------|---------------|
| <area> | Covered/Partial/Missing | <action> |

## Prerequisites Check

| Prerequisite | Status |
|--------------|--------|
| <item> | Met / Missing |

## Recommendations

1. <prioritized recommendation for safe implementation>
2. <next recommendation>
```

### Step 8: Present and Discuss

Present the key findings to the user. Highlight:

- Any high-risk breaking changes that need careful handling
- Missing prerequisites that must be resolved first
- Modules with no test coverage that should be tested before changing

## Rules

1. **Analysis, not implementation**: this skill analyzes and reports. It does NOT make any code changes.
2. **Scope to the phase**: only analyze the impact of the specific phase being assessed. Do not analyze the entire plan.
3. **Concrete evidence**: every risk assessment must reference specific files, functions, or dependencies. No vague "this might be risky" statements.
4. **Dependency analysis is the core value**: the most important output is knowing what else breaks when you change the target files.
5. **Persist the report**: unlike `validate-docs` (ephemeral), the impact report is written to a file because it remains relevant throughout the phase's implementation.
6. **Don't over-read**: use `grep` for dependency tracing, not full file reads. Read only the import/export sections of files, not entire implementations.
7. **No built-in explore agent**: do NOT use the built-in `explore` subagent type.
