control-loop-extraction

Extract and analyze agent reasoning loops, step functions, and termination conditions. Use when needing to (1) understand how an agent framework implements reasoning (ReAct, Plan-and-Solve, Reflection, etc.), (2) locate the core decision-making logic, (3) analyze loop mechanics and termination conditions, (4) document the step-by-step execution flow of an agent, or (5) compare reasoning patterns across frameworks.

25 stars

Best use case

control-loop-extraction is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using control-loop-extraction can expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/control-loop-extraction/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/dowwie/control-loop-extraction/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/control-loop-extraction/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How control-loop-extraction Compares

| Feature / Agent | control-loop-extraction | Standard Approach |
|-----------------|-------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It locates an agent framework's core reasoning loop, classifies the reasoning pattern (ReAct, Plan-and-Solve, Reflection, or Tree-of-Thoughts), documents the step function, and catalogs every termination condition. See the full description at the top of this page.

Where can I find the source code?

You can find the source code in the ComeOnOliver/skillshub repository on GitHub; the installation command above points directly at the raw SKILL.md file.

SKILL.md Source

# Control Loop Extraction

Extracts and documents the core agent reasoning loop from framework source code.

## Process

1. **Locate the loop** - Find the main agent execution loop
2. **Classify the pattern** - Identify ReAct, Plan-and-Solve, Reflection, or Tree-of-Thoughts
3. **Extract the step function** - Document the LLM → Parse → Decide flow
4. **Map termination** - Catalog all loop exit conditions
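
Step 1 can be approximated with a plain textual scan for loop constructs whose body mentions an LLM call. A minimal sketch (the regexes and identifier names like `llm.generate` are heuristic assumptions, not tied to any specific framework):

```python
import re

# Heuristic markers: loop headers, and method names that often denote LLM calls.
LOOP_RE = re.compile(r"^\s*(while\s+.+:|for\s+\w+\s+in\s+range\()")
LLM_RE = re.compile(r"\b(llm|model|client)\.(generate|invoke|chat|complete)\b")

def find_candidate_loops(source: str, window: int = 10):
    """Return 1-indexed line numbers of loops whose next `window` lines call an LLM."""
    lines = source.splitlines()
    candidates = []
    for i, line in enumerate(lines):
        if LOOP_RE.match(line):
            body = "\n".join(lines[i : i + window])
            if LLM_RE.search(body):
                candidates.append(i + 1)
    return candidates

snippet = """
while not done:
    thought = llm.generate(prompt)
    action = parse_action(thought)
"""
print(find_candidate_loops(snippet))  # → [2]
```

This is only a candidate filter; a real pass should confirm each hit by reading the surrounding class or function.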

## Reasoning Pattern Identification

### Pattern Signatures

**ReAct (Reason + Act)**
```python
# Signature: Thought → Action → Observation cycle
while not done:
    thought = llm.generate(prompt)      # Reasoning
    action = parse_action(thought)       # Action selection
    observation = execute(action)        # Environment feedback
    prompt = update_prompt(observation)  # Loop continuation
```

**Plan-and-Solve**
```python
# Signature: Upfront planning, then execution
plan = llm.generate("Create a plan for...")
for step in plan.steps:
    result = execute_step(step)
    if needs_replan(result):
        plan = replan(...)
```

**Reflection**
```python
# Signature: Act → Self-critique → Adjust
while not done:
    action = llm.generate(prompt)
    result = execute(action)
    critique = llm.generate(f"Evaluate: {result}")
    if critique.needs_adjustment:
        prompt = adjust_approach(critique)
```

**Tree-of-Thoughts**
```python
# Signature: Branch → Evaluate → Select
thoughts = [generate_thought() for _ in range(n)]
scores = [evaluate(t) for t in thoughts]
best = select_best(thoughts, scores)
```
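
The four signatures above can be turned into a rough classifier for step 2. A hedged sketch; the keyword lists are illustrative heuristics, not a definitive taxonomy:

```python
# Heuristic keyword sets per reasoning pattern (illustrative, not exhaustive).
PATTERN_KEYWORDS = {
    "ReAct": ["thought", "action", "observation"],
    "Plan-and-Solve": ["plan", "replan", "steps"],
    "Reflection": ["critique", "reflect", "evaluate"],
    "Tree-of-Thoughts": ["branch", "score", "select_best"],
}

def classify_pattern(source: str) -> str:
    """Score each pattern by keyword frequency and return the best match."""
    text = source.lower()
    scores = {
        name: sum(text.count(kw) for kw in kws)
        for name, kws in PATTERN_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unknown"

print(classify_pattern(
    "thought = llm.generate(p); action = parse(thought); observation = run(action)"
))  # → "ReAct"
```

Keyword counting will misfire on hybrid loops; treat the label as a starting hypothesis to verify against the actual control flow.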

## Step Function Analysis

The "step function" is the atomic unit of agent execution. Extract:

1. **Input Assembly** - How context is constructed for the LLM
2. **LLM Invocation** - The actual model call
3. **Output Parsing** - How raw output becomes structured actions
4. **Action Dispatch** - Tool execution vs. final response routing

### Key Code Patterns

```python
# Common step function structure
def step(self, state):
    # 1. Assemble input
    messages = self._build_messages(state)
    
    # 2. Call LLM
    response = self.llm.invoke(messages)
    
    # 3. Parse output
    parsed = self._parse_response(response)
    
    # 4. Dispatch
    if parsed.is_tool_call:
        return self._execute_tool(parsed.tool, parsed.args)
    else:
        return AgentFinish(parsed.final_answer)
```
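
The dispatch branch can be exercised in isolation with a stubbed parser; all names below (`Parsed`, the `Action:`/`Final:` prefixes) are hypothetical stand-ins, not the API of any real framework:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Parsed:
    is_tool_call: bool
    tool: str = ""
    args: Optional[dict] = None
    final_answer: str = ""

def step(llm_output: str):
    """Parse one raw LLM line and route it: tool call vs. final answer."""
    if llm_output.startswith("Action:"):
        name, _, arg = llm_output[len("Action:"):].strip().partition(" ")
        parsed = Parsed(is_tool_call=True, tool=name, args={"input": arg})
        return ("tool", parsed.tool, parsed.args)
    return ("finish", llm_output.removeprefix("Final:").strip())

print(step("Action: search weather in Paris"))  # → ('tool', 'search', {'input': 'weather in Paris'})
print(step("Final: It is sunny."))              # → ('finish', 'It is sunny.')
```

Keeping parsing and dispatch pure like this makes the step function easy to unit-test without a live model.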

## Termination Condition Catalog

### Common Termination Patterns

| Condition | Implementation | Risk |
|-----------|----------------|------|
| Step limit | `if step_count >= max_steps` | May cut off valid execution |
| Token limit | `if total_tokens >= max_tokens` | May truncate mid-thought |
| Explicit finish | `if action.type == "finish"` | Relies on LLM cooperation |
| Timeout | `if elapsed > timeout` | Wall-clock unpredictable |
| Loop detection | `if state in seen_states` | Requires state hashing |
| Error threshold | `if error_count >= max_errors` | May exit on recoverable errors |
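
In practice several of these conditions compose into a single guard checked once per iteration. A minimal sketch combining the step-limit, error-threshold, and loop-detection rows (threshold values and function names are illustrative):

```python
import hashlib

def make_guard(max_steps: int = 20, max_errors: int = 3):
    """Build a per-run guard closing over seen-state hashes and an error count."""
    seen, errors = set(), 0

    def should_stop(step_count: int, state: str, had_error: bool):
        nonlocal errors
        if had_error:
            errors += 1
        if step_count >= max_steps:
            return "step limit"
        if errors >= max_errors:
            return "error threshold"
        # Loop detection: hash the serialized state and look for repeats.
        digest = hashlib.sha256(state.encode()).hexdigest()
        if digest in seen:
            return "loop detected"
        seen.add(digest)
        return None  # keep going

    return should_stop

guard = make_guard(max_steps=5)
print(guard(0, "state-A", False))  # → None (keep going)
print(guard(1, "state-A", False))  # → "loop detected" (same state seen twice)
```

Returning a reason string rather than a bare boolean makes the exit cause loggable, which matters when auditing why a run stopped.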

### Anti-Pattern: No Termination Guard

```python
# DANGEROUS: No exit condition
while True:
    result = agent.step()
    if result.is_done:  # What if LLM never outputs done?
        break
```

**Fix:** Always include a step counter:

```python
for step in range(max_steps):
    result = agent.step()
    if result.is_done:
        break
else:
    logger.warning("Hit max steps limit")
```
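
The same bounded loop can also carry a wall-clock timeout, covering the Timeout row from the catalog above. A sketch using `time.monotonic`; the limits and the `step_fn` protocol are assumptions for illustration:

```python
import time

def run_with_timeout(step_fn, max_steps: int = 50, timeout_s: float = 30.0) -> str:
    """Drive an agent step function until done, step limit, or timeout."""
    start = time.monotonic()
    for _ in range(max_steps):
        if time.monotonic() - start > timeout_s:
            return "timeout"
        if step_fn() == "done":
            return "done"
    return "step limit"

print(run_with_timeout(lambda: "done"))  # → "done" on the first step
```

`time.monotonic` is preferable to `time.time` here because it is unaffected by system clock adjustments mid-run.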

## Output Template

```markdown
## Control Loop Analysis: [Framework Name]

### Reasoning Topology
- **Pattern**: [ReAct | Plan-and-Solve | Reflection | Tree-of-Thoughts | Hybrid]
- **Location**: `path/to/agent.py:L45-L120`

### Step Function
- **Input Assembly**: [Description of context building]
- **LLM Call**: [Method and parameters]
- **Parser**: [How output is structured]
- **Dispatch Logic**: [Tool vs Finish decision]

### Termination Conditions
1. [Condition 1 with code reference]
2. [Condition 2 with code reference]
3. ...

### Loop Detection
- **Method**: [Heuristic | State hash | None]
- **Implementation**: [Code reference or N/A]
```

## Integration Points

- **Prerequisite**: `codebase-mapping` to identify agent files
- **Feeds into**: `comparative-matrix` for pattern comparison
- **Feeds into**: `architecture-synthesis` for new loop design

Related Skills

All of the following are from ComeOnOliver/skillshub:

  • exa-local-dev-loop: Configure Exa local development with hot reload, testing, and mock responses. Use when setting up a development environment, writing tests against Exa, or establishing a fast iteration cycle. Trigger with phrases like "exa dev setup", "exa local development", "exa test setup", "develop with exa", "mock exa".

  • evernote-local-dev-loop: Set up efficient local development workflow for Evernote integrations. Use when configuring dev environment, setting up sandbox testing, or optimizing development iteration speed. Trigger with phrases like "evernote dev setup", "evernote local development", "evernote sandbox", "test evernote locally".

  • elevenlabs-local-dev-loop: Configure local ElevenLabs development with mocking, hot reload, and audio testing. Use when setting up a dev environment for TTS/voice projects, configuring test workflows, or building a fast iteration cycle with ElevenLabs audio. Trigger: "elevenlabs dev setup", "elevenlabs local development", "elevenlabs dev environment", "develop with elevenlabs", "test elevenlabs locally".

  • documenso-local-dev-loop: Set up local development environment and testing workflow for Documenso. Use when configuring dev environment, setting up test workflows, or establishing rapid iteration patterns with Documenso. Trigger with phrases like "documenso local dev", "documenso development", "test documenso locally", "documenso dev environment".

  • deepgram-local-dev-loop: Configure Deepgram local development workflow with testing and mocks. Use when setting up development environment, configuring test fixtures, or establishing rapid iteration patterns for Deepgram integration. Trigger: "deepgram local dev", "deepgram development setup", "deepgram test environment", "deepgram dev workflow", "deepgram mock".

  • databricks-local-dev-loop: Configure Databricks local development with Databricks Connect, Asset Bundles, and IDE. Use when setting up a local dev environment, configuring test workflows, or establishing a fast iteration cycle with Databricks. Trigger with phrases like "databricks dev setup", "databricks local", "databricks IDE", "develop with databricks", "databricks connect".

  • customerio-local-dev-loop: Configure Customer.io local development workflow. Use when setting up local testing, dev/staging isolation, or mocking Customer.io for unit tests. Trigger: "customer.io local dev", "test customer.io locally", "customer.io dev environment", "customer.io sandbox", "mock customer.io".

  • cursor-local-dev-loop: Optimize daily development workflow with Cursor IDE using Chat, Composer, Tab, and Git integration. Triggers on "cursor workflow", "cursor development loop", "cursor productivity", "cursor daily workflow", "cursor dev flow".

  • coreweave-local-dev-loop: Set up local development workflow for CoreWeave GPU deployments. Use when building containers locally, testing YAML manifests, or iterating on model serving configurations before deploying. Trigger with phrases like "coreweave dev setup", "coreweave local testing", "develop for coreweave", "coreweave container build".

  • cohere-local-dev-loop: Configure Cohere local development with mocking, testing, and hot reload. Use when setting up a development environment, configuring test workflows, or establishing a fast iteration cycle with Cohere API v2. Trigger with phrases like "cohere dev setup", "cohere local development", "cohere dev environment", "develop with cohere", "mock cohere".

  • coderabbit-local-dev-loop: Configure CodeRabbit CLI for local pre-commit code reviews and fast iteration. Use when setting up local development with CodeRabbit CLI reviews, integrating AI review into your commit workflow, or testing config changes. Trigger with phrases like "coderabbit dev setup", "coderabbit local development", "coderabbit CLI workflow", "coderabbit pre-commit review".

  • clickup-local-dev-loop: Set up local development for ClickUp API integrations with testing, mocking, and hot reload. Trigger: "clickup dev setup", "clickup local development", "clickup dev environment", "develop with clickup", "clickup testing setup", "mock clickup API".