when-chaining-agent-pipelines-use-stream-chain
Chain agent outputs as inputs in sequential or parallel pipelines for data flow orchestration
Best use case
when-chaining-agent-pipelines-use-stream-chain is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using when-chaining-agent-pipelines-use-stream-chain should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run repeatedly with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/when-chaining-agent-pipelines-use-stream-chain/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How when-chaining-agent-pipelines-use-stream-chain Compares
| Feature / Agent | when-chaining-agent-pipelines-use-stream-chain | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
It chains agent outputs into the inputs of subsequent agents, supporting sequential and parallel pipelines for data flow orchestration.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Agent Pipeline Chaining SOP
## Overview
This skill implements agent pipeline chaining where outputs from one agent become inputs to the next, supporting both sequential and parallel execution patterns with streaming data flows.
## Agents & Responsibilities
### task-orchestrator
**Role:** Pipeline coordination and orchestration
**Responsibilities:**
- Design pipeline architecture
- Connect agent stages
- Monitor data flow
- Handle pipeline errors
### memory-coordinator
**Role:** Data flow and state management
**Responsibilities:**
- Store intermediate results
- Coordinate data passing
- Manage pipeline state
- Ensure data consistency
## Phase 1: Design Pipeline
### Objective
Design pipeline architecture with stages, data flows, and execution strategy.
### Scripts
```bash
# Design pipeline architecture
npx claude-flow@alpha pipeline design \
--stages "research,analyze,code,test,review" \
--flow sequential \
--output pipeline-design.json
# Define data flow
npx claude-flow@alpha pipeline dataflow \
--design pipeline-design.json \
--output dataflow-spec.json
# Visualize pipeline
npx claude-flow@alpha pipeline visualize \
--design pipeline-design.json \
--output pipeline-diagram.png
# Store design in memory
npx claude-flow@alpha memory store \
--key "pipeline/design" \
--file pipeline-design.json
```
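For orientation, the sketch below shows one plausible shape for `pipeline-design.json`. The real schema is whatever `pipeline design` emits; every field name here is an assumption.

```bash
# Hypothetical illustration only: the actual pipeline-design.json schema
# is defined by claude-flow and may differ. This sketches the kind of
# structure the design command might emit for the five-stage pipeline above.
cat > pipeline-design.example.json <<'EOF'
{
  "flow": "sequential",
  "stages": [
    { "id": 1, "name": "research", "agent": "researcher" },
    { "id": 2, "name": "analyze",  "agent": "analyst" },
    { "id": 3, "name": "code",     "agent": "coder" },
    { "id": 4, "name": "test",     "agent": "tester" },
    { "id": 5, "name": "review",   "agent": "reviewer" }
  ]
}
EOF
```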
### Pipeline Patterns
**Sequential Pipeline:**
```
Agent1 → Agent2 → Agent3 → Agent4
```
**Parallel Pipeline:**
```
        ┌─ Agent2 ─┐
Agent1 ─┼─ Agent3 ─┼─ Agent5
        └─ Agent4 ─┘
```
**Hybrid Pipeline:**
```
Agent1 ─┬─ Agent2 ─┐
        └─ Agent3 ─┴─ Agent4 → Agent5
```
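A hybrid pipeline is easiest to reason about as per-stage dependencies: stages whose dependencies are all satisfied may run in parallel. The encoding below is hypothetical (the field names are assumptions, not the claude-flow schema):

```bash
# Hypothetical dependency encoding of the hybrid diagram above.
# agent2 and agent3 depend only on agent1, so they can run in parallel;
# agent4 waits for both branches; agent5 runs last.
cat > hybrid-design.example.json <<'EOF'
{
  "flow": "hybrid",
  "stages": [
    { "name": "agent1", "depends_on": [] },
    { "name": "agent2", "depends_on": ["agent1"] },
    { "name": "agent3", "depends_on": ["agent1"] },
    { "name": "agent4", "depends_on": ["agent2", "agent3"] },
    { "name": "agent5", "depends_on": ["agent4"] }
  ]
}
EOF
```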
## Phase 2: Connect Agents
### Objective
Connect agents with proper data flow channels and state management.
### Scripts
```bash
# Initialize pipeline
npx claude-flow@alpha pipeline init \
--design pipeline-design.json
# Spawn pipeline agents
npx claude-flow@alpha agent spawn --type researcher --pipeline-stage 1
npx claude-flow@alpha agent spawn --type analyst --pipeline-stage 2
npx claude-flow@alpha agent spawn --type coder --pipeline-stage 3
npx claude-flow@alpha agent spawn --type tester --pipeline-stage 4
# Connect pipeline stages
npx claude-flow@alpha pipeline connect \
--from-stage 1 --to-stage 2 \
--data-channel "memory"
npx claude-flow@alpha pipeline connect \
--from-stage 2 --to-stage 3 \
--data-channel "stream"
# Verify connections
npx claude-flow@alpha pipeline status --show-connections
```
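The four spawn commands above can be collapsed into a loop. A minimal sketch, assuming the stage order from the Phase 1 design:

```bash
# Spawn one agent per stage; the types and order follow the Phase 1 design.
types=(researcher analyst coder tester)
for i in "${!types[@]}"; do
  npx claude-flow@alpha agent spawn \
    --type "${types[$i]}" \
    --pipeline-stage "$((i + 1))"
done
```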
### Data Flow Mechanisms
**Memory-Based:**
```bash
# Agent 1 stores output
npx claude-flow@alpha memory store \
--key "pipeline/stage-1/output" \
--value "research findings..."
# Agent 2 retrieves input
npx claude-flow@alpha memory retrieve \
--key "pipeline/stage-1/output"
```
**Stream-Based:**
```bash
# Agent 1 streams output
npx claude-flow@alpha stream write \
--channel "stage-1-to-2" \
--data "streaming data..."
# Agent 2 consumes stream
npx claude-flow@alpha stream read \
--channel "stage-1-to-2"
```
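In practice a handoff benefits from a guard: store the output, then confirm it is readable before the next stage starts. A minimal sketch built from the memory commands above; the retry policy is this sketch's choice, not documented claude-flow behavior:

```bash
# Store stage 1 output, then verify it can be retrieved before stage 2
# begins; retry the read a few times in case the write has not persisted.
npx claude-flow@alpha memory store \
  --key "pipeline/stage-1/output" \
  --value "research findings..."

for attempt in 1 2 3; do
  if npx claude-flow@alpha memory retrieve \
      --key "pipeline/stage-1/output" > stage-1-output.txt; then
    break
  fi
  echo "retrieve failed (attempt $attempt), retrying..." >&2
  sleep 2
done
```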
## Phase 3: Execute Pipeline
### Objective
Execute pipeline with proper sequencing and data flow.
### Scripts
```bash
# Execute sequential pipeline
npx claude-flow@alpha pipeline execute \
--design pipeline-design.json \
--input initial-data.json \
--strategy sequential
# Execute parallel pipeline
npx claude-flow@alpha pipeline execute \
--design pipeline-design.json \
--input initial-data.json \
--strategy parallel \
--max-parallelism 3
# Monitor execution
npx claude-flow@alpha pipeline monitor --interval 5
# Track stage progress
npx claude-flow@alpha pipeline stages --show-progress
```
### Execution Strategies
**Sequential:**
- Stages execute one after another
- Output of stage N is input to stage N+1
- Simple error handling
- Predictable execution time
**Parallel:**
- Independent stages execute simultaneously
- Outputs merged at synchronization points
- Complex error handling
- Faster overall execution
**Adaptive:**
- Dynamically switches between sequential and parallel
- Based on stage dependencies and resource availability
- Optimizes for throughput
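The fan-out/fan-in idea behind the parallel strategy can be shown in plain bash: independent stages run as background jobs, and `wait` is the synchronization point where outputs merge. The `stageN.sh` scripts are hypothetical stand-ins for whatever each stage actually executes:

```bash
# Fan-out: run the three independent middle stages concurrently.
./stage2.sh < stage-1-output.txt > stage-2-output.txt &
./stage3.sh < stage-1-output.txt > stage-3-output.txt &
./stage4.sh < stage-1-output.txt > stage-4-output.txt &

# Fan-in: the synchronization point. Block until all three finish,
# then merge their outputs for the final stage.
wait
cat stage-2-output.txt stage-3-output.txt stage-4-output.txt \
  | ./stage5.sh > final-output.txt
```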
## Phase 4: Monitor Streaming
### Objective
Monitor data flow and pipeline execution in real-time.
### Scripts
```bash
# Monitor data flow
npx claude-flow@alpha stream monitor \
--all-channels \
--interval 2 \
--output stream-metrics.json
# Track stage throughput
npx claude-flow@alpha pipeline metrics \
--metric throughput \
--per-stage
# Monitor backpressure
npx claude-flow@alpha stream backpressure --detect
# Generate flow report
npx claude-flow@alpha pipeline report \
--include-timing \
--include-throughput \
--output pipeline-report.md
```
### Key Metrics
- **Stage Throughput:** Items processed per minute per stage
- **Pipeline Latency:** End-to-end processing time
- **Backpressure:** Queue buildup at stage boundaries
- **Error Rate:** Failures per stage
- **Resource Utilization:** CPU/memory per agent
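If `stream-metrics.json` carries per-stage counts and timings, stage throughput can be derived with `jq`. The field names below are assumptions about that file, not a documented claude-flow format:

```bash
# Hypothetical: items/minute per stage, assuming fields
# .stages[].name, .stages[].items_processed, .stages[].elapsed_seconds.
jq -r '.stages[]
  | "\(.name): \((.items_processed / (.elapsed_seconds / 60)) | floor) items/min"' \
  stream-metrics.json
```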
## Phase 5: Validate Results
### Objective
Validate pipeline outputs and ensure data integrity.
### Scripts
```bash
# Collect pipeline results
npx claude-flow@alpha pipeline results \
--output pipeline-results.json
# Validate data integrity
npx claude-flow@alpha pipeline validate \
--results pipeline-results.json \
--schema validation-schema.json
# Compare with expected output
npx claude-flow@alpha pipeline compare \
--actual pipeline-results.json \
--expected expected-output.json
# Generate validation report
npx claude-flow@alpha pipeline report \
--type validation \
--output validation-report.md
```
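The `validation-schema.json` referenced above is not shown; a plausible JSON Schema sketch for the results file follows. Every property name is an assumption about what `pipeline results` emits:

```bash
# Hypothetical JSON Schema for pipeline-results.json; the real format
# expected by `pipeline validate` may differ.
cat > validation-schema.example.json <<'EOF'
{
  "type": "object",
  "required": ["stages"],
  "properties": {
    "stages": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["name", "status", "output"],
        "properties": {
          "name":   { "type": "string" },
          "status": { "enum": ["success", "failed", "skipped"] },
          "output": { "type": "string" }
        }
      }
    }
  }
}
EOF
```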
## Success Criteria
- [ ] Pipeline design complete
- [ ] All stages connected
- [ ] Data flow functional
- [ ] Outputs validated
- [ ] Performance acceptable
### Performance Targets
- Stage latency: <30 seconds average
- Pipeline throughput: ≥10 items/minute
- Error rate: <2%
- Data integrity: 100%
## Best Practices
1. **Clear Stage Boundaries:** Each stage has single responsibility
2. **Data Validation:** Validate outputs before passing to next stage
3. **Error Handling:** Implement retry and fallback mechanisms (see the retry sketch after this list)
4. **Backpressure Management:** Prevent queue overflow
5. **Monitoring:** Track metrics continuously
6. **State Management:** Use memory coordination for state
7. **Testing:** Test each stage independently
8. **Documentation:** Document data schemas and flows
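A minimal retry wrapper illustrates practice 3. It works for any stage command; the linear backoff is an arbitrary choice for the sketch:

```bash
# Generic retry wrapper for any stage command.
# Usage: run_with_retry 3 npx claude-flow@alpha pipeline execute ...
run_with_retry() {
  local max="$1"; shift
  local attempt
  for attempt in $(seq 1 "$max"); do
    "$@" && return 0
    echo "attempt $attempt/$max failed: $*" >&2
    sleep "$((attempt * 2))"  # simple linear backoff
  done
  return 1
}
```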
## Common Issues & Solutions
### Issue: Pipeline Stalls
**Symptoms:** Stages stop processing
**Solution:** Check for backpressure, increase buffer sizes
### Issue: Data Loss
**Symptoms:** Missing data in outputs
**Solution:** Implement acknowledgment mechanism, use reliable channels
### Issue: High Latency
**Symptoms:** Slow end-to-end processing
**Solution:** Identify bottleneck stage, add parallelism
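For the stall case, a small watch loop built from commands already shown can surface backpressure before the pipeline stops entirely. It assumes (without confirmation from the docs) that `stream backpressure --detect` exits non-zero when queues build up:

```bash
# Poll for backpressure every 10 seconds; on detection, dump the
# connection status so the congested stage boundary can be found.
while true; do
  if ! npx claude-flow@alpha stream backpressure --detect; then
    echo "backpressure detected: check stage buffers" >&2
    npx claude-flow@alpha pipeline status --show-connections
  fi
  sleep 10
done
```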
## Integration Points
- **swarm-orchestration:** For complex multi-pipeline orchestration
- **advanced-swarm:** For optimized agent coordination
- **performance-analysis:** For bottleneck detection
## References
- Pipeline Design Patterns
- Stream Processing Theory
- Data Flow Architectures
Related Skills
preprocessing-data-with-automated-pipelines
Automate data cleaning, transformation, and validation for ML tasks. Use when requesting "preprocess data", "clean data", "ETL pipeline", or "data transformation".
orchestrating-deployment-pipelines
Use when you need to work with deployment and CI/CD. Provides deployment automation and orchestration with comprehensive guidance. Trigger with phrases like "deploy application", "create pipeline", or "automate deployment".
monitoring-cross-chain-bridges
Monitor cross-chain bridge TVL, volume, fees, and transaction status across networks. Use when researching bridges, comparing routes, or tracking bridge transactions. Trigger with phrases like "monitor bridges", "compare bridge fees", "track bridge tx", "bridge TVL", or "cross-chain transfer status".
kafka-stream-processor
Kafka Stream Processor: auto-activating skill for data pipelines. Triggers on "kafka stream processor". Part of the Data Pipelines skill category.
exploring-blockchain-data
Query and analyze blockchain data including blocks, transactions, and smart contracts. Use when querying blockchain data and transactions. Trigger with phrases like "explore blockchain", "query transactions", or "check on-chain data".
building-cicd-pipelines
Use when you need to work with deployment and CI/CD. Provides deployment automation and pipeline orchestration with comprehensive guidance. Trigger with phrases like "deploy application", "create pipeline", or "automate deployment".
building-automl-pipelines
Build automated machine learning pipelines with feature engineering, model selection, and hyperparameter tuning. Use when automating ML workflows from data preparation through model deployment. Trigger with phrases like "build automl pipeline", "automate ml workflow", or "create automated training pipeline".
analyzing-on-chain-data
Perform on-chain analysis including whale tracking, token flows, and network activity. Use when performing crypto analysis. Trigger with phrases like "analyze crypto", "check blockchain", or "monitor market".
stream-coding
Documentation-first development methodology. The goal is AI-ready documentation - when docs are clear enough, code generation becomes automatic. Triggers on "Build", "Create", "Implement", "Document", or "Spec out". Version 3.5 adds Phase 2.5 Adversarial Review and renames internal verification to Spec Gate (structural completeness). Clarity Gate is now a separate standalone tool for epistemic quality.
llm-application-dev-langchain-agent
You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph.
blockchain-developer
Build production-ready Web3 applications, smart contracts, and decentralized systems. Implements DeFi protocols, NFT platforms, DAOs, and enterprise blockchain integrations. Use PROACTIVELY for smart contracts, Web3 apps, DeFi protocols, or blockchain infrastructure.
let-chains-advisor
Identifies deeply nested if-let expressions and suggests let chains for cleaner control flow. Activates when users write nested conditionals with pattern matching.