swarm-intelligence
Build swarm intelligence systems where multiple AI agents collaborate to make predictions and solve complex problems. Use when: implementing ensemble AI predictions, building consensus-based decision systems, creating multi-agent prediction markets.
Best use case
swarm-intelligence is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using swarm-intelligence should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/swarm-intelligence/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Build swarm intelligence systems where multiple AI agents collaborate to make predictions and solve complex problems. Use when: implementing ensemble AI predictions, building consensus-based decision systems, creating multi-agent prediction markets.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Swarm Intelligence
## Overview
Build systems where multiple AI agents independently analyze a problem, then converge on predictions through voting, debate, or weighted aggregation. Inspired by biological swarms and ensemble methods — the collective intelligence of diverse agents consistently outperforms any single agent.
### Core Patterns
1. **Prediction Swarm (Vote & Aggregate)** — Each agent analyzes independently with a different persona, then votes are aggregated
2. **Debate Swarm (Argue & Converge)** — Agents see each other's reasoning and update positions over multiple rounds
3. **Specialist Swarm (Divide & Conquer)** — Each agent handles a different domain aspect, then a synthesizer combines results
## Instructions
When a user asks to build a swarm intelligence system, prediction ensemble, or multi-agent decision system:
1. **Identify the pattern** — Is it prediction (vote), debate (converge), or specialist (divide)?
2. **Define agents** — Each agent needs a unique persona/perspective and clear role
3. **Choose aggregation** — Weighted voting, median, debate rounds, or synthesis
4. **Implement with LangGraph** — Use parallel nodes for agents, then aggregation node
### Prediction Swarm Implementation
```python
"""Prediction swarm: N agents vote independently, aggregator combines."""
import json
import operator
from typing import Annotated, TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END


class SwarmState(TypedDict):
    question: str
    predictions: Annotated[list[dict], operator.add]  # fan-in: each agent appends
    final_answer: str


AGENT_PERSONAS = [
    {"name": "Optimist", "prompt": "You see opportunities and upside potential."},
    {"name": "Skeptic", "prompt": "You question assumptions and look for flaws."},
    {"name": "Analyst", "prompt": "You focus on data and historical patterns."},
    {"name": "Contrarian", "prompt": "You challenge consensus. Look for what others miss."},
    {"name": "Pragmatist", "prompt": "You focus on practical, real-world constraints."},
]


def make_agent_node(persona: dict):
    llm = ChatOpenAI(model="gpt-4o", temperature=0.7)

    def agent_fn(state: SwarmState) -> dict:
        response = llm.invoke(
            f"You are the {persona['name']}. {persona['prompt']}\n\n"
            f"Question: {state['question']}\n\n"
            f"Respond with JSON: {{\"prediction\": \"...\", \"confidence\": 0.0-1.0, \"reasoning\": \"...\"}}"
        )
        # Assumes the model returns raw JSON; harden with an output parser in production.
        prediction = json.loads(response.content)
        prediction["agent"] = persona["name"]
        return {"predictions": [prediction]}

    return agent_fn


def aggregator(state: SwarmState) -> dict:
    predictions = state["predictions"]
    votes: dict[str, float] = {}
    reasoning_parts = []
    for p in predictions:
        # Confidence-weighted voting: each agent's vote counts for its confidence.
        votes[p["prediction"]] = votes.get(p["prediction"], 0) + p["confidence"]
        reasoning_parts.append(f"- {p['agent']} ({p['confidence']:.0%}): {p['reasoning']}")
    winner = max(votes, key=votes.get)
    avg_conf = sum(p["confidence"] for p in predictions) / len(predictions)
    return {
        "final_answer": f"**Prediction:** {winner}\n**Confidence:** {avg_conf:.0%}\n**Breakdown:**\n"
        + "\n".join(reasoning_parts)
    }


# Build the graph: agents fan out from the start node in parallel, then fan in.
builder = StateGraph(SwarmState)
builder.add_node("aggregator", aggregator)
for persona in AGENT_PERSONAS:
    builder.add_node(persona["name"], make_agent_node(persona))
    builder.add_edge("__start__", persona["name"])
    builder.add_edge(persona["name"], "aggregator")
builder.add_edge("aggregator", END)
swarm = builder.compile()
```
### Debate Swarm (Multi-Round Convergence)
```python
"""Debate swarm: agents see each other's reasoning and update positions."""
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0.5)

DEBATE_AGENTS = [
    {"name": "Bull", "bias": "optimistic"},
    {"name": "Bear", "bias": "pessimistic"},
    {"name": "Quant", "bias": "data-driven"},
]


def run_debate(question: str, rounds: int = 3) -> dict:
    history = []
    for round_num in range(1, rounds + 1):
        round_responses = []
        for agent in DEBATE_AGENTS:
            # Each agent sees the previous round's positions before answering.
            context = ""
            if history:
                context = "Previous positions:\n" + "\n".join(
                    f"- {r['agent']}: {r['position']} (conf: {r['confidence']})"
                    for r in history[-1]
                )
            response = llm.invoke(
                f"You are {agent['name']}, a {agent['bias']} analyst.\n"
                f"Question: {question}\nRound {round_num}/{rounds}.\n{context}\n\n"
                f"State your position, confidence (0-1), and reasoning."
            )
            round_responses.append(
                {
                    "agent": agent["name"],
                    "position": response.content[:200],
                    "confidence": 0.7,  # placeholder; parse from the response in production
                    "full": response.content,
                }
            )
        history.append(round_responses)
    final = llm.invoke(
        f"Question: {question}\n\nFinal positions after {rounds} rounds:\n"
        + "\n".join(f"- {r['agent']}: {r['full']}" for r in history[-1])
        + "\n\nSynthesize a consensus answer."
    )
    return {"rounds": history, "consensus": final.content}
```
### Aggregation Strategies
| Strategy | Best For | How It Works |
|----------|----------|--------------|
| **Majority Vote** | Binary/categorical predictions | Most common answer wins |
| **Weighted Vote** | Varying agent confidence | Weight by confidence scores |
| **Median** | Numerical predictions | Take the median value |
| **Debate** | Complex reasoning | Multiple rounds of argumentation |
| **Synthesis** | Open-ended analysis | LLM combines all perspectives |
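The majority-vote and median rows can be implemented without an LLM call. As a minimal sketch (not part of the original skill), assuming the same `predictions` list-of-dicts shape the swarm state uses above:

```python
"""Sketch of non-LLM aggregation strategies over a predictions list."""
import statistics
from collections import Counter


def majority_vote(predictions: list[dict]) -> str:
    """Majority vote: the most common categorical prediction wins."""
    counts = Counter(p["prediction"] for p in predictions)
    return counts.most_common(1)[0][0]


def median_aggregate(predictions: list[dict]) -> float:
    """Median: robust numerical aggregation; outlier agents cannot skew it."""
    return statistics.median(float(p["prediction"]) for p in predictions)
```

Either function can be dropped into the `aggregator` node in place of the confidence-weighted vote when predictions are categorical or numerical respectively.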
## Examples
### Example 1: Market Trend Prediction
```python
result = swarm.invoke({"question": "Will AI agents replace 50% of SaaS tools by 2027?"})
print(result["final_answer"])
# Output:
# **Prediction:** Unlikely within that timeframe
# **Confidence:** 68%
# **Breakdown:**
# - Optimist (85%): AI agents will automate many workflows but full replacement takes longer
# - Skeptic (40%): Enterprise adoption is slow, regulatory hurdles remain
# - Analyst (65%): Historical tech adoption curves suggest 2029-2030
# - Contrarian (70%): The question is wrong — agents will augment, not replace
# - Pragmatist (55%): Integration complexity means gradual transition
```
### Example 2: Multi-Domain Business Analysis
```python
SPECIALISTS = {
    "market": "Analyze market size, competition, and demand signals.",
    "technical": "Assess technical feasibility and architecture risks.",
    "financial": "Model costs, revenue potential, and break-even timeline.",
    "legal": "Identify regulatory risks and compliance needs.",
}


def specialist_swarm(question: str) -> str:
    analyses = {}
    for domain, prompt in SPECIALISTS.items():
        response = llm.invoke(f"You are a {domain} specialist. {prompt}\n\nQuestion: {question}")
        analyses[domain] = response.content
    synthesis = llm.invoke(
        f"Specialist analyses for: {question}\n\n"
        + "\n\n".join(f"**{k.upper()}:**\n{v}" for k, v in analyses.items())
        + "\n\nSynthesize into a unified recommendation."
    )
    return synthesis.content


# Usage: specialist_swarm("Should we build a competitor to Notion using AI-native architecture?")
```
## Guidelines
1. **Diversity is key** — Agents with identical prompts add noise, not intelligence. Give each a distinct perspective
2. **Odd number of agents** — Avoids ties in voting (5, 7, or 9 agents work best)
3. **Confidence calibration** — Ask agents to self-report confidence; use it for weighting
4. **Cost control** — Parallel calls are fast but expensive. Use cheaper models for screening, expensive for synthesis
5. **Sweet spot is 5-7 agents** — Beyond that, additional agents yield diminishing returns
6. **Temperature variation** — Use different temperatures per agent (0.3 for analytical, 0.9 for creative)
7. **Use swarms for high-stakes decisions** — For simple tasks, a single agent is faster and cheaper
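Temperature variation (guideline 6) can be sketched as a per-persona lookup; the `PERSONA_TEMPERATURES` mapping below is hypothetical and not part of the skill's `AGENT_PERSONAS`:

```python
"""Hypothetical per-persona temperature assignment (guideline 6 sketch)."""

# Analytical personas run cool and deterministic; creative ones run hot.
PERSONA_TEMPERATURES = {
    "Analyst": 0.3,
    "Pragmatist": 0.4,
    "Skeptic": 0.5,
    "Optimist": 0.8,
    "Contrarian": 0.9,
}


def temperature_for(persona_name: str, default: float = 0.7) -> float:
    """Look up a persona's temperature, falling back to the swarm default."""
    return PERSONA_TEMPERATURES.get(persona_name, default)
```

Passing `temperature_for(persona["name"])` into the `ChatOpenAI(...)` call inside `make_agent_node` would give each agent its own sampling behavior.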
## Dependencies
```bash
pip install langgraph langchain-openai # Python
npm install openai # Node.js
```
Related Skills
review-swarm
Parallel read-only multi-agent code review of git diffs. Use when: reviewing diffs for regressions, security risks, performance issues, or wanting a parallel review swarm.
bug-hunt-swarm
Parallel read-only multi-agent root-cause investigation for bugs and regressions. Use when: investigating bugs, finding root causes, tracing regressions, or diagnosing failures with multi-agent swarm.
agent-swarm-orchestration
Coordinate multiple AI agents working together on complex tasks — routing, handoffs, consensus, memory sharing, and quality gates. Use when tasks involve building multi-agent systems, coordinating specialist agents in a pipeline, implementing agent-to-agent communication, designing swarm architectures, setting up agent orchestration frameworks, or building autonomous agent teams with supervision and quality control. Covers hierarchical, mesh, and pipeline topologies.