execution-engine-analysis

Analyze control flow, concurrency models, and event architectures in agent frameworks. Use when (1) understanding async vs sync execution patterns, (2) classifying execution topology (DAG/FSM/Linear), (3) mapping event emission and observability hooks, (4) evaluating scalability characteristics, or (5) comparing execution models across frameworks.

Best use case

execution-engine-analysis is best used when you need a repeatable AI agent workflow rather than a one-off prompt.

Teams using execution-engine-analysis should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/execution-engine-analysis/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/dowwie/execution-engine-analysis/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/execution-engine-analysis/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How execution-engine-analysis Compares

Feature / Agent            execution-engine-analysis   Standard Approach
Platform Support           Not specified               Limited / Varies
Context Awareness          High                        Baseline
Installation Complexity    Unknown                     N/A

Frequently Asked Questions

What does this skill do?

It analyzes control flow, concurrency models, and event architectures in agent frameworks: classifying async vs sync execution patterns, mapping execution topology (DAG/FSM/linear chain), cataloging event emission and observability hooks, and assessing scalability characteristics, so you can compare execution models across frameworks.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Execution Engine Analysis

Analyzes the control flow substrate and concurrency model.

## Process

1. **Identify async model** — Native async, sync-with-wrappers, or hybrid
2. **Classify topology** — DAG, FSM, or linear chain
3. **Catalog events** — Callbacks, listeners, generators
4. **Map observability** — Pre/post hooks, interception points
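
To make the process concrete, here is a minimal, hypothetical first pass over these four steps: a grep-style scan that counts the indicator strings catalogued in the sections below. The repository path and indicator lists are illustrative assumptions, not a fixed recipe.

```python
# Hypothetical sketch: count indicator strings per category across a
# framework checkout. Paths and indicator lists are assumptions.
from pathlib import Path

INDICATORS = {
    "native_async": ["async def", "await ", "asyncio.gather"],
    "sync_wrappers": ["asyncio.run(", "ThreadPoolExecutor", "run_in_executor"],
    "dag": ["add_edge", "topological", "networkx"],
    "fsm": ["State(Enum)", "transition", "current_state"],
}

def scan(repo_root: str) -> dict[str, int]:
    """Count indicator hits per category across all Python files."""
    counts = {category: 0 for category in INDICATORS}
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for category, needles in INDICATORS.items():
            counts[category] += sum(text.count(n) for n in needles)
    return counts

if __name__ == "__main__":
    print(scan("path/to/framework"))  # hit counts suggest where to read first
```

Raw counts only rank where to look; the classifications below still require reading the actual entry points.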

## Concurrency Model Classification

### Native Async

```python
# Signature: async/await throughout
async def run(self):
    result = await self.llm.agenerate(messages)
    return await self.process(result)

# Entry point uses asyncio
asyncio.run(agent.run())
```

**Indicators**: `async def`, `await`, `asyncio.gather`, `aiohttp`

### Sync with Wrappers

```python
# Signature: sync API wrapping async internals
def run(self):
    return asyncio.run(self._async_run())

# Or using thread pools
def run(self):
    with ThreadPoolExecutor() as pool:
        future = pool.submit(self._blocking_call)
        return future.result()
```

**Indicators**: `asyncio.run()` inside sync methods, `ThreadPoolExecutor`, `run_in_executor`
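
Where string matching is too noisy, an AST pass can confirm this signature directly. A minimal sketch, assuming the wrapper calls `asyncio.run(...)` by that literal name inside a non-async function:

```python
# Hedged sketch: flag sync functions that call asyncio.run(), a strong
# sync-with-wrappers signal. Assumes Python source text as input.
import ast

def find_sync_wrappers(source: str) -> list[str]:
    """Return names of non-async functions that call asyncio.run()."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):  # excludes AsyncFunctionDef
            for call in ast.walk(node):
                if (isinstance(call, ast.Call)
                        and isinstance(call.func, ast.Attribute)
                        and call.func.attr == "run"
                        and isinstance(call.func.value, ast.Name)
                        and call.func.value.id == "asyncio"):
                    hits.append(node.name)
    return hits

sample = "def run(self):\n    return asyncio.run(self._async_run())\n"
print(find_sync_wrappers(sample))  # ['run']
```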

### Hybrid

```python
# Both sync and async APIs exposed
def invoke(self, input):
    return self._sync_invoke(input)

async def ainvoke(self, input):
    return await self._async_invoke(input)
```

**Indicators**: Paired methods (`invoke`/`ainvoke`), `sync_to_async` decorators
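
One mechanical way to spot the hybrid pattern is to look for `a`-prefixed coroutine twins of sync methods. The naming convention is an assumption borrowed from the `invoke`/`ainvoke` style above; frameworks that pair methods differently need a different probe:

```python
# Hypothetical sketch: find (sync, async) method pairs on a class,
# assuming the async twin is the sync name with an "a" prefix.
import inspect

def paired_methods(cls: type) -> list[tuple[str, str]]:
    """Return (sync_name, async_name) pairs like ('invoke', 'ainvoke')."""
    pairs = []
    for name, _ in inspect.getmembers(cls, inspect.isfunction):
        twin = getattr(cls, "a" + name, None)
        if twin is not None and inspect.iscoroutinefunction(twin):
            pairs.append((name, "a" + name))
    return pairs

class Runner:  # toy class for illustration
    def invoke(self, x): return x
    async def ainvoke(self, x): return x

print(paired_methods(Runner))  # [('invoke', 'ainvoke')]
```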

## Execution Topology

### DAG (Directed Acyclic Graph)

```python
# Signature: Nodes with dependencies
class Node:
    def __init__(self, deps: list["Node"]): ...

graph.add_edge(node_a, node_b)
result = graph.execute()  # Topological order
```

**Indicators**: `Graph`, `Node`, `Edge` classes, `networkx`, topological sort
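
For reference, here is a minimal sketch of what executing in topological order means, built on the stdlib `graphlib` rather than any particular framework's graph API; the node names and work functions are illustrative:

```python
# Minimal sketch of topological-order execution using the stdlib.
# The dependency map and work functions are illustrative assumptions.
from graphlib import TopologicalSorter

# Map each node to the set of nodes it depends on.
deps = {"summarize": {"fetch", "parse"}, "parse": {"fetch"}, "fetch": set()}
work = {name: (lambda n=name: f"{n} done") for name in deps}

results = {}
for node in TopologicalSorter(deps).static_order():
    results[node] = work[node]()  # all dependencies already finished

print(list(results))  # ['fetch', 'parse', 'summarize']
```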

### FSM (Finite State Machine)

```python
# Signature: Explicit states and transitions
class State(Enum):
    THINKING = "thinking"
    ACTING = "acting"
    DONE = "done"

def transition(self, current: State, event: str) -> State:
    if current == State.THINKING and event == "action_chosen":
        return State.ACTING
    return current  # remaining transitions elided
```

**Indicators**: State enums, transition tables, `current_state`, state machine libraries
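
The transition-table variant of the same machine replaces the if/elif chain with a dict keyed on `(state, event)`. The extra table entries below are illustrative assumptions:

```python
# Hedged sketch: table-driven version of the FSM above. Entries other
# than (THINKING, "action_chosen") are illustrative assumptions.
from enum import Enum

class State(Enum):
    THINKING = "thinking"
    ACTING = "acting"
    DONE = "done"

TRANSITIONS = {
    (State.THINKING, "action_chosen"): State.ACTING,
    (State.ACTING, "observation"): State.THINKING,
    (State.THINKING, "final_answer"): State.DONE,
}

def transition(current: State, event: str) -> State:
    return TRANSITIONS.get((current, event), current)  # unknown events: stay put

assert transition(State.THINKING, "action_chosen") is State.ACTING
```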

### Linear Chain

```python
# Signature: Sequential step execution
def run(self):
    result = self.step1()
    result = self.step2(result)
    result = self.step3(result)
    return result

# Or pipeline pattern
chain = step1 | step2 | step3
result = chain.invoke(input)
```

**Indicators**: Sequential calls, pipe operators (`|`), `Chain`, `Pipeline` classes
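
A minimal sketch of the mechanism behind the pipe operator: `__or__` composes two steps into a new callable. This shows the pattern only, not any specific framework's implementation:

```python
# Hypothetical sketch of a pipe-composable step. Real Chain/Pipeline
# classes add validation, tracing, and async variants on top of this.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Step") -> "Step":
        # (a | b) runs a, then feeds its result to b
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

step1 = Step(lambda x: x + 1)
step2 = Step(lambda x: x * 2)
step3 = Step(str)

chain = step1 | step2 | step3
print(chain.invoke(3))  # '8'
```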

## Event Architecture

### Callbacks

```python
class Callbacks:
    def on_llm_start(self, prompt): ...
    def on_llm_end(self, response): ...
    def on_tool_start(self, tool, input): ...
    def on_tool_end(self, output): ...
    def on_error(self, error): ...
```

**Flexibility**: Low — fixed hook points
**Traceability**: Medium — easy to follow
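
A hedged sketch of how an engine typically drives these fixed hook points: bracket the LLM call with `on_llm_start`/`on_llm_end` and route exceptions to `on_error`. The `llm.generate` interface is an assumption:

```python
# Illustrative dispatch loop around one LLM call; llm.generate is an
# assumed interface, and callbacks is a list of Callbacks instances.
def call_llm(llm, prompt, callbacks):
    for cb in callbacks:
        cb.on_llm_start(prompt)
    try:
        response = llm.generate(prompt)
    except Exception as exc:
        for cb in callbacks:
            cb.on_error(exc)
        raise
    for cb in callbacks:
        cb.on_llm_end(response)
    return response
```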

### Event Listeners/Emitters

```python
emitter = EventEmitter()
emitter.on('llm:start', handler)
emitter.on('tool:*', wildcard_handler)
emitter.emit('llm:start', {'prompt': prompt})
```

**Flexibility**: High — dynamic registration
**Traceability**: Low — harder to trace
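
A minimal emitter sketch with wildcard support, assuming the `namespace:event` naming shown above; real implementations differ in ordering and error handling:

```python
# Hypothetical minimal emitter; fnmatch provides the 'tool:*' wildcards.
from collections import defaultdict
from fnmatch import fnmatch

class EventEmitter:
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, pattern: str, handler) -> None:
        self._handlers[pattern].append(handler)

    def emit(self, event: str, payload: dict) -> None:
        for pattern, handlers in self._handlers.items():
            if fnmatch(event, pattern):  # 'tool:*' matches 'tool:start'
                for handler in handlers:
                    handler(payload)

emitter = EventEmitter()
emitter.on("llm:*", lambda p: print("llm event:", p))
emitter.emit("llm:start", {"prompt": "hi"})  # llm event: {'prompt': 'hi'}
```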

### Async Generators (Streaming)

```python
async def run(self):
    async for chunk in self.llm.astream(prompt):
        yield {"type": "token", "content": chunk}
    yield {"type": "done"}
```

**Flexibility**: Medium — streaming-native
**Traceability**: High — follows data flow
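
On the consumer side, the caller drains the stream with `async for`. This sketch assumes the event shape from the snippet above and a concrete `agent` instance:

```python
# Hedged consumer sketch for the streaming generator above.
import asyncio

async def consume(agent):
    async for event in agent.run():
        if event["type"] == "token":
            print(event["content"], end="", flush=True)
        elif event["type"] == "done":
            print()  # newline once the stream finishes

# asyncio.run(consume(agent))  # given a concrete agent instance
```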

## Observability Hooks Inventory

| Hook Point | Purpose | Interception Level |
|------------|---------|-------------------|
| Pre-LLM | Modify prompt | Input |
| Post-LLM | Access raw response | Output |
| Pre-Tool | Validate tool input | Input |
| Post-Tool | Transform tool output | Output |
| Pre-Step | Observe state | Read-only |
| Post-Step | Modify next step | Control flow |
| On-Error | Handle/transform | Error |
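
As one illustration, a Pre-Tool input hook can be a wrapper that lets registered hooks validate or rewrite arguments before the tool runs. The hook signature here is an assumption, not a standard API:

```python
# Hypothetical Pre-Tool interception: each hook may transform the
# input dict before the underlying tool function executes.
def with_pre_tool_hooks(tool_fn, hooks):
    def wrapped(tool_name: str, tool_input: dict):
        for hook in hooks:
            tool_input = hook(tool_name, tool_input)
        return tool_fn(tool_name, tool_input)
    return wrapped

def redact_secrets(name, payload):
    return {k: "***" if "key" in k else v for k, v in payload.items()}

run_tool = with_pre_tool_hooks(lambda name, payload: payload,
                               hooks=[redact_secrets])
print(run_tool("search", {"query": "q", "api_key": "s3cret"}))
# {'query': 'q', 'api_key': '***'}
```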

### Questions to Answer

- Can you intercept tool input before execution?
- Is the raw LLM response accessible (with token counts)?
- Can you modify control flow from hooks?
- Are hooks sync or async?

## Output Template

```markdown
## Execution Engine Analysis: [Framework Name]

### Concurrency Model
- **Type**: [Native Async / Sync-with-Wrappers / Hybrid]
- **Entry Point**: `path/to/main.py:run()`
- **Thread Safety**: [Yes/No/Partial]

### Execution Topology
- **Model**: [DAG / FSM / Linear Chain]
- **Implementation**: [Description with code refs]
- **Parallelization**: [Supported/Not Supported]

### Event Architecture
- **Pattern**: [Callbacks / Listeners / Generators]
- **Registration**: [Static / Dynamic]
- **Streaming**: [Supported / Not Supported]

### Observability Inventory

| Hook | Location | Async | Modifiable |
|------|----------|-------|------------|
| on_llm_start | callbacks.py:L23 | Yes | Input only |
| on_tool_end | callbacks.py:L45 | Yes | Output |
| ... | ... | ... | ... |

### Scalability Assessment
- **Blocking Operations**: [List any]
- **Resource Limits**: [Token counters, rate limits]
- **Recommended Concurrency**: [Threads/Processes/AsyncIO]
```

## Integration

- **Prerequisite**: `codebase-mapping` to identify execution files
- **Feeds into**: `comparative-matrix` for async decisions
- **Related**: `control-loop-extraction` for agent-specific flow

Related Skills

adk-engineer

Expert software engineer specializing in creating production-ready ADK agents with best practices, code structure, testing, and deployment automation. Use when asked to "build ADK agent", "create agent code", or "engineer ADK application".

project-workflow-analysis-blueprint-generator

Comprehensive technology-agnostic prompt generator for documenting end-to-end application workflows. Automatically detects project architecture patterns, technology stacks, and data flow patterns to generate detailed implementation blueprints covering entry points, service layers, data access, error handling, and testing approaches across multiple technologies including .NET, Java/Spring, React, and microservices architectures.

game-engine

Expert skill for building web-based game engines and games using HTML5, Canvas, WebGL, and JavaScript. Use when asked to create games, build game engines, implement game physics, handle collision detection, set up game loops, manage sprites, add game controls, or work with 2D/3D rendering. Covers techniques for platformers, breakout-style games, maze games, tilemaps, audio, multiplayer via WebRTC, and publishing games.

datanalysis-credit-risk

Credit risk data cleaning and variable screening pipeline for pre-loan modeling. Use when working with raw credit data that needs quality assessment, missing value analysis, or variable selection before modeling. Covers data loading and formatting, abnormal period filtering, missing rate calculation, high-missing variable removal, low-IV variable filtering, high-PSI variable removal, Null Importance denoising, high-correlation variable removal, and cleaning report generation. Applicable scenarios are credit risk data cleaning, variable screening, and pre-loan modeling preprocessing.

Line Execution Checker

Check if specific lines were executed using gcov data.

Coverage Analysis

Coverage analysis is essential for understanding which parts of your code are exercised during fuzzing. It helps identify fuzzing blockers like magic value checks and tracks the effectiveness of harness improvements over time.

ROS 2 Engineering Skills

A progressive-disclosure skill for ROS 2 development — from first workspace to

using-dbt-for-analytics-engineering

Builds and modifies dbt models, writes SQL transformations using ref() and source(), creates tests, and validates results with dbt show. Use when doing any dbt work - building or modifying models, debugging errors, exploring unfamiliar data sources, writing tests, or evaluating impact of changes.

product-analysis

Multi-path parallel product analysis with cross-model test-time compute scaling. Spawns parallel agents (Claude Code agent teams + Codex CLI) to explore the product from multiple perspectives, then synthesizes findings into actionable optimization plans. Can invoke competitors-analysis for competitive benchmarking. Use when asked for a "product audit", "self-review", "发布前审查", "产品分析", "analyze our product", "UX audit", or "信息架构审计".

competitors-analysis

Analyze competitor repositories with an evidence-based approach. Use when tracking competitors, creating competitor profiles, or generating competitive analysis. CRITICAL: all analysis must be based on actual cloned code, never assumptions. Triggers include "analyze competitor", "add competitor", "competitive analysis", or "竞品分析".

image-analysis

Image analysis and recognition for local images, web images, video, and files. Suited to OCR, object recognition, scene understanding, and similar tasks. This skill must be used whenever the user sends an image or asks for image analysis.

oss-code-analysis

Explore open-source GitHub repository source trees via web browsing to analyze and compare feature implementations at the code level. Supports two modes: cross-project comparison and single-project deep dive. Use when evaluating how OSS projects implement a specific feature, choosing architecture patterns, or benchmarking implementation strategies.