component-model-analysis
Evaluate extensibility patterns, abstraction layers, and configuration approaches in frameworks. Use when (1) assessing base class/protocol design, (2) understanding dependency injection patterns, (3) evaluating plugin/extension systems, (4) comparing code-first vs config-first approaches, or (5) determining framework flexibility for customization.
Best use case
component-model-analysis is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams that repeatedly assess base class/protocol design, dependency injection patterns, plugin/extension systems, code-first vs config-first trade-offs, or overall framework flexibility for customization.
Users should expect a more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "component-model-analysis" skill to help with this workflow task. Context: Evaluate extensibility patterns, abstraction layers, and configuration approaches in frameworks. Use when (1) assessing base class/protocol design, (2) understanding dependency injection patterns, (3) evaluating plugin/extension systems, (4) comparing code-first vs config-first approaches, or (5) determining framework flexibility for customization.
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/component-model-analysis/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How component-model-analysis Compares
| Feature / Agent | component-model-analysis | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Evaluate extensibility patterns, abstraction layers, and configuration approaches in frameworks. Use when (1) assessing base class/protocol design, (2) understanding dependency injection patterns, (3) evaluating plugin/extension systems, (4) comparing code-first vs config-first approaches, or (5) determining framework flexibility for customization.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Component Model Analysis
Evaluates extensibility patterns and configuration approaches.
## Process
1. **Identify base classes** — Find BaseLLM, BaseTool, BaseAgent, etc.
2. **Classify abstraction depth** — Thick (lots of logic) vs thin (interfaces)
3. **Analyze DI patterns** — Constructor, factory, registry, container
4. **Document configuration** — Code-first, config-first, or hybrid
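Steps 1 and 2 can be roughed out with a static method-count heuristic. A minimal sketch — the `ThinTool`/`ThickTool` classes and the 3:1 concrete-to-abstract threshold are illustrative assumptions, not part of the skill:

```python
import inspect
from abc import ABC, abstractmethod

def classify_abstraction(cls: type) -> str:
    """Classify a base class as thick, thin, or mixed by counting
    concrete vs. abstract public methods (a rough heuristic)."""
    methods = [
        fn for name, fn in inspect.getmembers(cls, inspect.isfunction)
        if not name.startswith("_")
    ]
    abstract = [fn for fn in methods if getattr(fn, "__isabstractmethod__", False)]
    concrete = len(methods) - len(abstract)
    if concrete == 0:
        return "thin"          # pure interface: nothing inherited
    if concrete > 3 * max(len(abstract), 1):
        return "thick"         # mostly inherited behavior
    return "mixed"

# Hypothetical example bases
class ThinTool(ABC):
    @abstractmethod
    def run(self, query: str) -> str: ...

class ThickTool(ABC):
    @abstractmethod
    def run(self, query: str) -> str: ...
    def validate(self, q): return bool(q)
    def retry(self, q): return self.run(q)
    def log(self, msg): print(msg)
    def cache_get(self, q): return None
    def cache_set(self, q, r): pass
```

Applied to a real framework you would point this at its actual base classes rather than toy ones.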
## Abstraction Layer Assessment
### Thick Abstractions
```python
from abc import ABC, abstractmethod

class BaseLLM(ABC):
    """Many methods, lots of inherited behavior"""

    def __init__(self, model: str, temperature: float = 0.7):
        self.model = model
        self.temperature = temperature
        self._cache = {}

    def generate(self, prompt: str) -> str:
        cached = self._check_cache(prompt)
        if cached:
            return cached
        result = self._generate_impl(prompt)
        self._update_cache(prompt, result)
        return self._postprocess(result)

    @abstractmethod
    def _generate_impl(self, prompt: str) -> str: ...

    def _check_cache(self, prompt): ...
    def _update_cache(self, prompt, result): ...
    def _postprocess(self, result): ...
    def stream(self, prompt): ...
    def batch(self, prompts): ...
    # ... 15+ more methods
```
**Characteristics**:
- Deep inheritance trees (3+ levels)
- Many non-abstract methods
- Shared state/caching logic
- Hard to understand full behavior
### Thin Abstractions (Protocols)
```python
from collections.abc import Iterator
from typing import Protocol

class LLM(Protocol):
    """Minimal interface contract"""
    def generate(self, messages: list[Message]) -> str: ...

class StreamingLLM(Protocol):
    def stream(self, messages: list[Message]) -> Iterator[str]: ...
```
**Characteristics**:
- Pure interfaces
- No inherited behavior
- Duck typing compatible
- Easy to mock/test
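The mock-friendliness is easy to demonstrate: a test double satisfies the protocol simply by having the right method, with no registration or inheritance. A sketch with a simplified list-of-dicts message type (an assumption, not the framework's real type):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class LLM(Protocol):
    def generate(self, messages: list) -> str: ...

class FakeLLM:
    """Test double: no base class, just a structurally matching method."""
    def generate(self, messages: list) -> str:
        return "stub response"

def run_agent(llm: LLM) -> str:
    # Accepts anything that structurally satisfies LLM
    return llm.generate([{"role": "user", "content": "hi"}])

assert isinstance(FakeLLM(), LLM)            # structural check at runtime
assert run_agent(FakeLLM()) == "stub response"
```

Note that `runtime_checkable` `isinstance` checks only verify method presence, not signatures.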
### Mixed Approach
```python
import time
from abc import ABC, abstractmethod

class LLMBase(ABC):
    """Some shared logic, but minimal"""

    @abstractmethod
    def generate(self, messages: list) -> str: ...

    def generate_with_retry(self, messages: list, retries: int = 3) -> str:
        """Optional convenience method"""
        last_err: Exception | None = None
        for i in range(retries):
            try:
                return self.generate(messages)
            except RateLimitError as err:  # provider-specific exception
                last_err = err
                time.sleep(2 ** i)
        raise last_err
```
## Dependency Injection Patterns
### Constructor Injection
```python
class Agent:
    def __init__(
        self,
        llm: LLM,
        tools: list[Tool],
        memory: Memory | None = None,
    ):
        self.llm = llm
        self.tools = tools
        self.memory = memory or InMemoryStore()
```
**Pros**: Explicit, testable, IDE support
**Cons**: Verbose construction, manual wiring
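The testability claim is concrete: because dependencies arrive through the constructor, a test can hand the agent a hand-rolled fake and inspect what it received. The `RecordingLLM` and the stripped-down `Agent` below are illustrative stand-ins, not the framework's real classes:

```python
class RecordingLLM:
    """Hand-rolled fake: records prompts, returns a canned reply."""
    def __init__(self, reply: str):
        self.reply = reply
        self.calls: list[str] = []

    def generate(self, prompt: str) -> str:
        self.calls.append(prompt)
        return self.reply

class Agent:
    def __init__(self, llm):
        self.llm = llm  # injected, so tests control it completely

    def answer(self, question: str) -> str:
        return self.llm.generate(question)

fake = RecordingLLM(reply="42")
agent = Agent(llm=fake)
assert agent.answer("meaning of life?") == "42"
assert fake.calls == ["meaning of life?"]
```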
### Factory Pattern
```python
import yaml

class Agent:
    @classmethod
    def from_config(cls, config: AgentConfig) -> "Agent":
        llm = LLMFactory.create(config.llm)
        tools = [ToolFactory.create(t) for t in config.tools]
        return cls(llm=llm, tools=tools)

    @classmethod
    def from_yaml(cls, path: str) -> "Agent":
        with open(path) as f:
            config = yaml.safe_load(f)
        return cls.from_config(AgentConfig(**config))
```
**Pros**: Flexible construction, config-driven
**Cons**: Hidden dependencies, magic
### Global Registry
```python
TOOL_REGISTRY: dict[str, type[Tool]] = {}
def register_tool(name: str):
def decorator(cls):
TOOL_REGISTRY[name] = cls
return cls
return decorator
@register_tool("search")
class SearchTool(Tool): ...
# Usage
tool = TOOL_REGISTRY["search"]()
```
**Pros**: Plugin-friendly, discoverable
**Cons**: Global state, harder to test, implicit
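One way to soften the "harder to test" drawback is a snapshot/restore helper around the global registry. A sketch that restates a minimal registry for self-containment (the `isolated_registry` helper is an assumption, not a framework API):

```python
import contextlib

TOOL_REGISTRY: dict[str, type] = {}

def register_tool(name: str):
    def decorator(cls):
        TOOL_REGISTRY[name] = cls
        return cls
    return decorator

@contextlib.contextmanager
def isolated_registry():
    """Snapshot and restore the global registry so tests don't leak state."""
    saved = dict(TOOL_REGISTRY)
    try:
        yield TOOL_REGISTRY
    finally:
        TOOL_REGISTRY.clear()
        TOOL_REGISTRY.update(saved)

with isolated_registry():
    @register_tool("temp")
    class TempTool: ...
    assert "temp" in TOOL_REGISTRY   # visible inside the block

assert "temp" not in TOOL_REGISTRY   # restored afterwards
```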
### Container-Based DI
```python
from dependency_injector import containers, providers

class Container(containers.DeclarativeContainer):
    config = providers.Configuration()

    llm = providers.Singleton(
        OpenAI,
        api_key=config.openai.api_key,
    )

    agent = providers.Factory(
        Agent,
        llm=llm,
    )
```
**Pros**: Full lifecycle control, scopes
**Cons**: Complex, learning curve
## Configuration Strategy
### Code-First
```python
agent = Agent(
    llm=OpenAI(model="gpt-4", temperature=0.7),
    tools=[SearchTool(), CalculatorTool()],
    max_steps=10,
)
```
**Characteristics**: Type-safe, IDE completion, refactorable
### Config-First
```yaml
# agent.yaml
llm:
  provider: openai
  model: gpt-4
  temperature: 0.7
tools:
  - search
  - calculator
max_steps: 10
```
```python
agent = Agent.from_yaml("agent.yaml")
```
**Characteristics**: Non-developer friendly, runtime changes, less type safety
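Some of the lost type safety can be recovered by parsing the raw mapping into typed objects before use. A minimal stdlib sketch — Pydantic would give richer validation; the `AgentConfig` fields here simply mirror the YAML example above and are not a real framework schema:

```python
from dataclasses import dataclass, field

@dataclass
class LLMConfig:
    provider: str
    model: str
    temperature: float = 0.7

@dataclass
class AgentConfig:
    llm: LLMConfig
    tools: list[str] = field(default_factory=list)
    max_steps: int = 10

    @classmethod
    def from_dict(cls, raw: dict) -> "AgentConfig":
        # Unknown keys in raw["llm"] raise TypeError here: fail fast
        # instead of passing a bad config downstream.
        llm = LLMConfig(**raw["llm"])
        return cls(
            llm=llm,
            tools=raw.get("tools", []),
            max_steps=raw.get("max_steps", 10),
        )

# e.g. raw = yaml.safe_load(open("agent.yaml")) in real use
raw = {
    "llm": {"provider": "openai", "model": "gpt-4"},
    "tools": ["search", "calculator"],
    "max_steps": 10,
}
cfg = AgentConfig.from_dict(raw)
assert cfg.llm.temperature == 0.7   # default applied
assert cfg.max_steps == 10
```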
### Hybrid
```python
# Base config from file
base = AgentConfig.from_yaml("agent.yaml")

# Code overrides: exclude "llm" from the dump so the keyword isn't passed twice
agent = Agent(
    **base.dict(exclude={"llm"}),
    llm=CustomLLM(),  # Override specific component
)
```
## Output Template
```markdown
## Component Model Analysis: [Framework Name]
### Abstraction Assessment
| Component | Base Class | Depth | Type |
|-----------|-----------|-------|------|
| LLM | BaseLLM | 3 levels | Thick |
| Tool | BaseTool | 2 levels | Mixed |
| Memory | Protocol | 0 levels | Thin |
### Dependency Injection
- **Primary Pattern**: [Constructor/Factory/Registry/Container]
- **Testability**: [Easy/Medium/Hard]
- **Configuration**: [Code/Config/Hybrid]
### Extension Points
| Extension | Mechanism | Difficulty |
|-----------|-----------|------------|
| Custom LLM | Inherit BaseLLM | Medium |
| Custom Tool | @register_tool | Easy |
| Custom Memory | Implement Protocol | Easy |
### Configuration
- **Strategy**: [Code-first/Config-first/Hybrid]
- **Formats**: [Python/YAML/JSON/TOML]
- **Validation**: [Pydantic/Manual/None]
### Recommendations
- [List any concerns or suggestions]
```
## Integration
- **Prerequisite**: `codebase-mapping` to identify base classes
- **Feeds into**: `comparative-matrix` for extensibility decisions
- **Related**: `antipattern-catalog` for inheritance issues