control-loop-extraction
Extract and analyze agent reasoning loops, step functions, and termination conditions. Use when needing to (1) understand how an agent framework implements reasoning (ReAct, Plan-and-Solve, Reflection, etc.), (2) locate the core decision-making logic, (3) analyze loop mechanics and termination conditions, (4) document the step-by-step execution flow of an agent, or (5) compare reasoning patterns across frameworks.
Best use case
control-loop-extraction is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams working across multiple agent frameworks who need to understand how each one implements reasoning (ReAct, Plan-and-Solve, Reflection, etc.), locate the core decision-making logic, analyze loop mechanics and termination conditions, document step-by-step execution flow, or compare reasoning patterns between frameworks.
Users should expect a more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "control-loop-extraction" skill on this repository. Context: I need to understand how this agent framework implements its reasoning loop. Locate the core decision-making logic, classify the pattern (ReAct, Plan-and-Solve, Reflection, etc.), document the step function, and catalog the termination conditions.
Example output
A structured control-loop analysis following the skill's output template: the reasoning pattern classification, a step function breakdown, and a catalog of termination conditions, formatted consistently so it can be reused in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/control-loop-extraction/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How control-loop-extraction Compares
| Feature / Agent | control-loop-extraction | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
It extracts and documents the core agent reasoning loop from framework source code: it locates the main execution loop, classifies the reasoning pattern (ReAct, Plan-and-Solve, Reflection, or Tree-of-Thoughts), documents the step function, and catalogs all termination conditions.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Control Loop Extraction
Extracts and documents the core agent reasoning loop from framework source code.
## Process
1. **Locate the loop** - Find the main agent execution loop
2. **Classify the pattern** - Identify ReAct, Plan-and-Solve, Reflection, or Tree-of-Thoughts
3. **Extract the step function** - Document the LLM → Parse → Decide flow
4. **Map termination** - Catalog all loop exit conditions
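Step 1 of the process above can be sketched mechanically. The following is a minimal, hypothetical scan that flags loops whose body mentions an LLM-looking call; the `LLM_HINTS` keywords and the scoring heuristic are illustrative assumptions, not any framework's API:

```python
import ast
import pathlib

# Substrings that commonly appear in LLM invocation code (assumed, not exhaustive)
LLM_HINTS = ("llm", "generate", "invoke", "completion", "chat")

def find_candidate_loops(root: str):
    """Yield (file, line) locations of while/for loops whose body mentions an LLM call."""
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files that don't parse (e.g. templates)
        for node in ast.walk(tree):
            if isinstance(node, (ast.While, ast.For)):
                # ast.dump flattens the subtree to text for a cheap keyword check
                body_src = ast.dump(node).lower()
                if any(hint in body_src for hint in LLM_HINTS):
                    yield str(path), node.lineno
```

The hits still need manual review; the point is only to narrow a large codebase down to a handful of loop sites worth reading.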
## Reasoning Pattern Identification
### Pattern Signatures
**ReAct (Reason + Act)**
```python
# Signature: Thought → Action → Observation cycle
while not done:
    thought = llm.generate(prompt)       # Reasoning
    action = parse_action(thought)       # Action selection
    observation = execute(action)        # Environment feedback
    prompt = update_prompt(observation)  # Loop continuation
```
**Plan-and-Solve**
```python
# Signature: Upfront planning, then execution
plan = llm.generate("Create a plan for...")
for step in plan.steps:
    result = execute_step(step)
    if needs_replan(result):
        plan = replan(...)
```
**Reflection**
```python
# Signature: Act → Self-critique → Adjust
while not done:
    action = llm.generate(prompt)
    result = execute(action)
    critique = llm.generate(f"Evaluate: {result}")
    if critique.needs_adjustment:
        prompt = adjust_approach(critique)
```
**Tree-of-Thoughts**
```python
# Signature: Branch → Evaluate → Select
thoughts = [generate_thought() for _ in range(n)]
scores = [evaluate(t) for t in thoughts]
best = select_best(thoughts, scores)
```
## Step Function Analysis
The "step function" is the atomic unit of agent execution. Extract:
1. **Input Assembly** - How context is constructed for the LLM
2. **LLM Invocation** - The actual model call
3. **Output Parsing** - How raw output becomes structured actions
4. **Action Dispatch** - Tool execution vs. final response routing
### Key Code Patterns
```python
# Common step function structure
def step(self, state):
    # 1. Assemble input
    messages = self._build_messages(state)
    # 2. Call LLM
    response = self.llm.invoke(messages)
    # 3. Parse output
    parsed = self._parse_response(response)
    # 4. Dispatch
    if parsed.is_tool_call:
        return self._execute_tool(parsed.tool, parsed.args)
    else:
        return AgentFinish(parsed.final_answer)
```
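The parse stage is usually where step functions break. A minimal sketch of a `_parse_response`-style parser, assuming a ReAct-style text format ("Action: ... / Action Input: ..." or "Final Answer: ..."); real frameworks use their own formats (JSON tool calls, XML tags), so the regexes and the `ParsedStep` shape here are illustrative:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParsedStep:
    is_tool_call: bool
    tool: Optional[str] = None
    args: Optional[str] = None
    final_answer: Optional[str] = None

def parse_response(text: str) -> ParsedStep:
    # A final answer ends the loop, so check for it first
    final = re.search(r"Final Answer:\s*(.*)", text, re.DOTALL)
    if final:
        return ParsedStep(is_tool_call=False, final_answer=final.group(1).strip())
    action = re.search(r"Action:\s*(\S+)\s*Action Input:\s*(.*)", text, re.DOTALL)
    if action:
        return ParsedStep(True, tool=action.group(1), args=action.group(2).strip())
    # Unparseable output is itself a termination hazard; surface it explicitly
    raise ValueError(f"Could not parse agent output: {text!r}")
```

When extracting a real parser, note how it handles the third branch: silently retrying, raising, or treating raw text as a final answer all change the loop's termination behavior.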
## Termination Condition Catalog
### Common Termination Patterns
| Condition | Implementation | Risk |
|-----------|----------------|------|
| Step limit | `if step_count >= max_steps` | May cut off valid execution |
| Token limit | `if total_tokens >= max_tokens` | May truncate mid-thought |
| Explicit finish | `if action.type == "finish"` | Relies on LLM cooperation |
| Timeout | `if elapsed > timeout` | Wall-clock unpredictable |
| Loop detection | `if state in seen_states` | Requires state hashing |
| Error threshold | `if error_count >= max_errors` | May exit on recoverable errors |
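Production loops typically combine several of the conditions above. A hedged sketch of one way to bundle them into a single guard; the thresholds, the `state_key` argument, and the returned reason strings are assumptions for illustration, not any framework's interface:

```python
import time
from typing import Optional

class TerminationGuard:
    """Combines step limit, timeout, error threshold, and loop detection."""

    def __init__(self, max_steps=25, max_seconds=120.0, max_errors=3):
        self.max_steps = max_steps
        self.deadline = time.monotonic() + max_seconds
        self.max_errors = max_errors
        self.steps = 0
        self.errors = 0
        self.seen_states = set()

    def should_stop(self, state_key=None, errored=False) -> Optional[str]:
        """Return the reason to stop, or None to continue."""
        self.steps += 1
        if errored:
            self.errors += 1
        if self.steps > self.max_steps:
            return "step_limit"
        if time.monotonic() > self.deadline:
            return "timeout"
        if self.errors >= self.max_errors:
            return "error_threshold"
        if state_key is not None:
            if state_key in self.seen_states:
                return "loop_detected"   # exact repeat of a prior state
            self.seen_states.add(state_key)
        return None
```

Returning a reason string rather than a bare boolean makes the exit condition observable in logs, which is exactly what this skill's termination catalog is meant to surface.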
### Anti-Pattern: No Termination Guard
```python
# DANGEROUS: No exit condition
while True:
    result = agent.step()
    if result.is_done:  # What if LLM never outputs done?
        break
```
**Fix:** Always include a step counter:
```python
for step in range(max_steps):
    result = agent.step()
    if result.is_done:
        break
else:
    logger.warning("Hit max steps limit")
```
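The "loop detection" row in the table above relies on some form of state hashing. One hypothetical way to build such a key: hash the observable parts of each step. Which fields belong in the key is an assumption here; real frameworks differ, and hashing too little causes false positives while hashing too much (e.g. timestamps) defeats detection:

```python
import hashlib
import json

def state_key(last_action: str, last_observation: str) -> str:
    """Deterministic digest of the fields that define a repeated state."""
    payload = json.dumps(
        {"action": last_action, "observation": last_observation},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

seen = set()

def is_repeat(action: str, observation: str) -> bool:
    """True if this (action, observation) pair was already visited."""
    key = state_key(action, observation)
    if key in seen:
        return True
    seen.add(key)
    return False
```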
## Output Template
```markdown
## Control Loop Analysis: [Framework Name]
### Reasoning Topology
- **Pattern**: [ReAct | Plan-and-Solve | Reflection | Tree-of-Thoughts | Hybrid]
- **Location**: `path/to/agent.py:L45-L120`
### Step Function
- **Input Assembly**: [Description of context building]
- **LLM Call**: [Method and parameters]
- **Parser**: [How output is structured]
- **Dispatch Logic**: [Tool vs Finish decision]
### Termination Conditions
1. [Condition 1 with code reference]
2. [Condition 2 with code reference]
3. ...
### Loop Detection
- **Method**: [Heuristic | State hash | None]
- **Implementation**: [Code reference or N/A]
```
## Integration Points
- **Prerequisite**: `codebase-mapping` to identify agent files
- **Feeds into**: `comparative-matrix` for pattern comparison
- **Feeds into**: `architecture-synthesis` for new loop design
Related Skills
security-requirement-extraction
Derive security requirements from threat models and business context. Use when translating threats into actionable requirements, creating security user stories, or building security test cases.
hig-components-controls
Apple HIG guidance for selection and input controls including pickers, toggles, sliders, steppers, segmented controls, combo boxes, text fields, text views, labels, token fields, virtual keyboards, rating indicators, and gauges.
stitch-loop
Teaches agents to iteratively build websites using Stitch with an autonomous baton-passing loop pattern
star-story-extraction
Auto-invoke after task completion to extract interview-ready STAR stories from completed work.
resume-bullet-extraction
Auto-invoke after task completion to generate powerful resume bullet points from completed work.
design-spec-extraction
Extract comprehensive JSON design specifications from visual sources including Figma exports, UI mockups, screenshots, or live website captures. Produces W3C DTCG-compliant output with component trees, suitable for code generation, design documentation, and developer handoff.
standards-extraction
Extract coding standards and conventions from CONTRIBUTING.md, .editorconfig, linter configs. Use for onboarding and ensuring consistent contributions.
creating-feedback-loops
Expert at creating continuous improvement feedback loops for Claude's responses. Use when establishing self-improvement processes, tracking progress over time, or implementing iterative refinement workflows.
azure-quotas
Check/manage Azure quotas and usage across providers. For deployment planning, capacity validation, region selection. WHEN: "check quotas", "service limits", "current usage", "request quota increase", "quota exceeded", "validate capacity", "regional availability", "provisioning limits", "vCPU limit", "how many vCPUs available in my subscription".
raindrop-io
Manage Raindrop.io bookmarks with AI assistance. Save and organize bookmarks, search your collection, manage reading lists, and organize research materials. Use when working with bookmarks, web research, reading lists, or when user mentions Raindrop.io.
zlibrary-to-notebooklm
Automatically download books from Z-Library and upload them to Google NotebookLM. Supports PDF/EPUB formats, automatic conversion, and one-click knowledge base creation.
discover-skills
Use when none of the currently available skills fit the task (or when the user explicitly asks you to find a skill). Based on the task's goals and constraints, this skill produces a concise shortlist of candidate skills to help you pick the best fit for the current task.