execution-engine-analysis

Analyze control flow, concurrency models, and event architectures in agent frameworks. Use when (1) understanding async vs sync execution patterns, (2) classifying execution topology (DAG/FSM/Linear), (3) mapping event emission and observability hooks, (4) evaluating scalability characteristics, or (5) comparing execution models across frameworks.

242 stars

Best use case

execution-engine-analysis is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams comparing control flow, concurrency models, and event architectures across multiple agent frameworks.


Users should expect a more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.

Practical example

Example input

Use the "execution-engine-analysis" skill to help with this workflow task. Context: Analyze control flow, concurrency models, and event architectures in agent frameworks. Use when (1) understanding async vs sync execution patterns, (2) classifying execution topology (DAG/FSM/Linear), (3) mapping event emission and observability hooks, (4) evaluating scalability characteristics, or (5) comparing execution models across frameworks.

Example output

A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.

When to use this skill

  • Use this skill when you want a reusable workflow rather than writing the same prompt again and again.

When not to use this skill

  • Do not use this when you only need a one-off answer and do not need a reusable workflow.
  • Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/execution-engine-analysis/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/dowwie/execution-engine-analysis/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/execution-engine-analysis/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How execution-engine-analysis Compares

| Feature / Agent | execution-engine-analysis | Standard Approach |
|-----------------|---------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Analyze control flow, concurrency models, and event architectures in agent frameworks. Use when (1) understanding async vs sync execution patterns, (2) classifying execution topology (DAG/FSM/Linear), (3) mapping event emission and observability hooks, (4) evaluating scalability characteristics, or (5) comparing execution models across frameworks.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Execution Engine Analysis

Analyzes the control flow substrate and concurrency model.

## Process

1. **Identify async model** — Native async, sync-with-wrappers, or hybrid
2. **Classify topology** — DAG, FSM, or linear chain
3. **Catalog events** — Callbacks, listeners, generators
4. **Map observability** — Pre/post hooks, interception points
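The four steps above can be partly mechanized. As a rough first pass, a regex scan for the indicator strings listed in the sections below can classify a source file's concurrency model. This is a hypothetical sketch, easily fooled by comments and strings, not a substitute for reading the code:

```python
import re

# Indicator patterns drawn from the classification sections below.
ASYNC_INDICATORS = [r"\basync def\b", r"\bawait\b", r"asyncio\.gather", r"aiohttp"]
WRAPPER_INDICATORS = [r"asyncio\.run\(", r"ThreadPoolExecutor", r"run_in_executor"]

def classify_concurrency(source: str) -> str:
    """Crude classifier: presence of both indicator families is used as a
    proxy for 'hybrid' (the real signal is paired invoke/ainvoke APIs)."""
    has_async = any(re.search(p, source) for p in ASYNC_INDICATORS)
    has_wrappers = any(re.search(p, source) for p in WRAPPER_INDICATORS)
    if has_async and has_wrappers:
        return "hybrid"
    if has_wrappers:
        return "sync-with-wrappers"
    if has_async:
        return "native-async"
    return "sync"
```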

## Concurrency Model Classification

### Native Async

```python
# Signature: async/await throughout
async def run(self):
    result = await self.llm.agenerate(messages)
    return await self.process(result)

# Entry point uses asyncio
asyncio.run(agent.run())
```

**Indicators**: `async def`, `await`, `asyncio.gather`, `aiohttp`

### Sync with Wrappers

```python
# Signature: sync API wrapping async internals
def run(self):
    return asyncio.run(self._async_run())

# Or using thread pools
def run(self):
    with ThreadPoolExecutor() as pool:
        future = pool.submit(self._blocking_call)
        return future.result()
```

**Indicators**: `asyncio.run()` inside sync methods, `ThreadPoolExecutor`, `run_in_executor`

### Hybrid

```python
# Both sync and async APIs exposed
def invoke(self, input):
    return self._sync_invoke(input)

async def ainvoke(self, input):
    return await self._async_invoke(input)
```

**Indicators**: Paired methods (`invoke`/`ainvoke`), `sync_to_async` decorators

## Execution Topology

### DAG (Directed Acyclic Graph)

```python
# Signature: Nodes with dependencies
class Node:
    def __init__(self, deps: list[Node]): ...

graph.add_edge(node_a, node_b)
result = graph.execute()  # Topological order
```

**Indicators**: `Graph`, `Node`, `Edge` classes, `networkx`, topological sort
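The `graph.execute()` call above typically reduces to a topological sort over node dependencies. A minimal sketch using the standard library's `graphlib` (the `graph`/`funcs` shapes are illustrative, not any particular framework's API):

```python
from graphlib import TopologicalSorter

def execute_dag(graph: dict, funcs: dict) -> dict:
    """Run callables in dependency order.

    `graph` maps each node name to the list of nodes it depends on;
    `funcs` maps each node name to a callable taking its deps' results.
    """
    results = {}
    for node in TopologicalSorter(graph).static_order():
        deps = [results[d] for d in graph.get(node, ())]
        results[node] = funcs[node](*deps)
    return results
```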

### FSM (Finite State Machine)

```python
# Signature: Explicit states and transitions
class State(Enum):
    THINKING = "thinking"
    ACTING = "acting"
    DONE = "done"

def transition(self, current: State, event: str) -> State:
    if current == State.THINKING and event == "action_chosen":
        return State.ACTING
```

**Indicators**: State enums, transition tables, `current_state`, state machine libraries
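The if-chain in the sketch above scales poorly as states multiply; frameworks with explicit transition tables usually store the same logic as a mapping. A minimal sketch (the event names here are illustrative):

```python
from enum import Enum

class State(Enum):
    THINKING = "thinking"
    ACTING = "acting"
    DONE = "done"

# Transition table: (current_state, event) -> next_state.
TRANSITIONS = {
    (State.THINKING, "action_chosen"): State.ACTING,
    (State.ACTING, "result_ready"): State.THINKING,
    (State.THINKING, "final_answer"): State.DONE,
}

def transition(current: State, event: str) -> State:
    # Unknown (state, event) pairs leave the machine where it is.
    return TRANSITIONS.get((current, event), current)
```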

### Linear Chain

```python
# Signature: Sequential step execution
def run(self):
    result = self.step1()
    result = self.step2(result)
    result = self.step3(result)
    return result

# Or pipeline pattern
chain = step1 | step2 | step3
result = chain.invoke(input)
```

**Indicators**: Sequential calls, pipe operators (`|`), `Chain`, `Pipeline` classes
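The pipe-operator form relies on operator overloading: each step implements `__or__` to return a composed step. A minimal sketch of the pattern (class and method names are illustrative, not a specific framework's API):

```python
class Step:
    """Wraps a callable so steps compose left-to-right with `|`."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Step") -> "Step":
        # (a | b).invoke(x) == b.invoke(a.invoke(x))
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)
```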

## Event Architecture

### Callbacks

```python
class Callbacks:
    def on_llm_start(self, prompt): ...
    def on_llm_end(self, response): ...
    def on_tool_start(self, tool, input): ...
    def on_tool_end(self, output): ...
    def on_error(self, error): ...
```

**Flexibility**: Low — fixed hook points
**Traceability**: Medium — easy to follow

### Event Listeners/Emitters

```python
emitter = EventEmitter()
emitter.on('llm:start', handler)
emitter.on('tool:*', wildcard_handler)
emitter.emit('llm:start', {'prompt': prompt})
```

**Flexibility**: High — dynamic registration
**Traceability**: Low — harder to trace
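Wildcard patterns like `tool:*` are often implemented with glob matching over event names. A minimal emitter sketch using the standard library's `fnmatch` (illustrative, not a specific library's API; real emitters usually add unsubscription and error isolation):

```python
from fnmatch import fnmatch

class EventEmitter:
    def __init__(self):
        self.handlers = []  # list of (pattern, handler) pairs

    def on(self, pattern: str, handler) -> None:
        self.handlers.append((pattern, handler))

    def emit(self, event: str, payload=None) -> None:
        # Every handler whose pattern glob-matches the event name fires.
        for pattern, handler in self.handlers:
            if fnmatch(event, pattern):
                handler(event, payload)
```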

### Async Generators (Streaming)

```python
async def run(self):
    async for chunk in self.llm.astream(prompt):
        yield {"type": "token", "content": chunk}
    yield {"type": "done"}
```

**Flexibility**: Medium — streaming-native
**Traceability**: High — follows data flow

## Observability Hooks Inventory

| Hook Point | Purpose | Interception Level |
|------------|---------|-------------------|
| Pre-LLM | Modify prompt | Input |
| Post-LLM | Access raw response | Output |
| Pre-Tool | Validate tool input | Input |
| Post-Tool | Transform tool output | Output |
| Pre-Step | Observe state | Read-only |
| Post-Step | Modify next step | Control flow |
| On-Error | Handle/transform | Error |
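Most of the input/output hook points in the table reduce to a wrap-the-call pattern. A minimal sketch for the Pre-Tool/Post-Tool rows (the hook signatures are assumptions, not a specific framework's API):

```python
def with_hooks(tool, pre=None, post=None):
    """Wrap a tool callable with optional pre/post interception hooks."""
    def wrapped(tool_input):
        if pre is not None:
            tool_input = pre(tool_input)   # Pre-Tool: validate/modify input
        output = tool(tool_input)
        if post is not None:
            output = post(output)          # Post-Tool: transform output
        return output
    return wrapped
```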

### Questions to Answer

- Can you intercept tool input before execution?
- Is the raw LLM response accessible (with token counts)?
- Can you modify control flow from hooks?
- Are hooks sync or async?
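The last question can often be answered mechanically with `inspect`, assuming the framework follows an `on_*` naming convention for its hooks (that convention is an assumption here):

```python
import inspect

def hook_kinds(obj) -> dict:
    """Map each on_* hook on `obj` to 'async' or 'sync'."""
    kinds = {}
    for name, member in inspect.getmembers(obj, callable):
        if name.startswith("on_"):
            kinds[name] = (
                "async" if inspect.iscoroutinefunction(member) else "sync"
            )
    return kinds
```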

## Output Template

```markdown
## Execution Engine Analysis: [Framework Name]

### Concurrency Model
- **Type**: [Native Async / Sync-with-Wrappers / Hybrid]
- **Entry Point**: `path/to/main.py:run()`
- **Thread Safety**: [Yes/No/Partial]

### Execution Topology
- **Model**: [DAG / FSM / Linear Chain]
- **Implementation**: [Description with code refs]
- **Parallelization**: [Supported/Not Supported]

### Event Architecture
- **Pattern**: [Callbacks / Listeners / Generators]
- **Registration**: [Static / Dynamic]
- **Streaming**: [Supported / Not Supported]

### Observability Inventory

| Hook | Location | Async | Modifiable |
|------|----------|-------|------------|
| on_llm_start | callbacks.py:L23 | Yes | Input only |
| on_tool_end | callbacks.py:L45 | Yes | Output |
| ... | ... | ... | ... |

### Scalability Assessment
- **Blocking Operations**: [List any]
- **Resource Limits**: [Token counters, rate limits]
- **Recommended Concurrency**: [Threads/Processes/AsyncIO]
```

## Integration

- **Prerequisite**: `codebase-mapping` to identify execution files
- **Feeds into**: `comparative-matrix` for async decisions
- **Related**: `control-loop-extraction` for agent-specific flow

Related Skills

log-analysis

242
from aiskillstore/marketplace

Analyze application logs to identify errors, performance issues, and security anomalies. Use when debugging issues, monitoring system health, or investigating incidents. Handles various log formats including Apache, Nginx, application logs, and JSON logs.

wireshark-network-traffic-analysis

242
from aiskillstore/marketplace

This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow TCP/UDP streams", "detect network anomalies", "investigate suspicious traffic", or "perform protocol analysis". It provides comprehensive techniques for network packet capture, filtering, and analysis using Wireshark.

wireshark-analysis

242
from aiskillstore/marketplace

This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow TCP/UDP streams", "dete...

voice-ai-engine-development

242
from aiskillstore/marketplace

Build real-time conversational AI voice engines using async worker pipelines, streaming transcription, LLM agents, and TTS synthesis with interrupt handling and multi-provider support

vector-database-engineer

242
from aiskillstore/marketplace

Expert in vector databases, embedding strategies, and semantic search implementation. Masters Pinecone, Weaviate, Qdrant, Milvus, and pgvector for RAG applications, recommendation systems, and similar

unreal-engine-cpp-pro

242
from aiskillstore/marketplace

Expert guide for Unreal Engine 5.x C++ development, covering UObject hygiene, performance patterns, and best practices.

tutorial-engineer

242
from aiskillstore/marketplace

Creates step-by-step tutorials and educational content from code. Transforms complex concepts into progressive learning experiences with hands-on examples. Use PROACTIVELY for onboarding guides, feature tutorials, or concept explanations.

team-composition-analysis

242
from aiskillstore/marketplace

This skill should be used when the user asks to "plan team structure", "determine hiring needs", "design org chart", "calculate compensation", "plan equity allocation", or requests organizational design and headcount planning for a startup.

stride-analysis-patterns

242
from aiskillstore/marketplace

Apply STRIDE methodology to systematically identify threats. Use when analyzing system security, conducting threat modeling sessions, or creating security documentation.

reverse-engineer

242
from aiskillstore/marketplace

Expert reverse engineer specializing in binary analysis, disassembly, decompilation, and software analysis. Masters IDA Pro, Ghidra, radare2, x64dbg, and modern RE toolchains. Handles executable analysis, library inspection, protocol extraction, and vulnerability research. Use PROACTIVELY for binary analysis, CTF challenges, security research, or understanding undocumented software.

research-engineer

242
from aiskillstore/marketplace

An uncompromising Academic Research Engineer. Operates with absolute scientific rigor, objective criticism, and zero flair. Focuses on theoretical correctness, formal verification, and optimal implementation across any required technology.

rag-engineer

242
from aiskillstore/marketplace

Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LLM applications. Use when: building RAG, vector search, embeddings, semantic search, document retrieval.