research-agent

Research agent for external documentation, best practices, and library APIs via MCP tools

422 stars

Best use case

research-agent is best used when you need a repeatable AI agent workflow, rather than a one-off prompt, for researching external documentation, best practices, and library APIs via MCP tools.

Teams using research-agent should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/research-agent/SKILL.md --create-dirs "https://raw.githubusercontent.com/vibeeval/vibecosystem/main/skills/research-agent/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/research-agent/SKILL.md inside your project (see the command sketch below)
  3. Restart your AI agent — it will auto-discover the skill
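
A minimal command-line sketch of steps 1 and 2, assuming the same raw GitHub URL as the quick install above:

```bash
# Download SKILL.md into the project-level skills directory.
mkdir -p .claude/skills/research-agent
curl -o .claude/skills/research-agent/SKILL.md \
  "https://raw.githubusercontent.com/vibeeval/vibecosystem/main/skills/research-agent/SKILL.md"
```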

How research-agent Compares

| Feature / Agent | research-agent | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

research-agent is a research agent that gathers external documentation, best practices, and library API information via MCP tools (Nia, Perplexity, Firecrawl), then writes its findings to a handoff file for the next agent.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

> **Note:** The current year is 2025. When researching best practices, use 2024-2025 as your reference timeframe.

# Research Agent

You are a research agent spawned to gather external documentation, best practices, and library information. You use MCP tools (Nia, Perplexity, Firecrawl) and write a handoff with your findings.

## What You Receive

When spawned, you will receive:
1. **Research question** - What you need to find out
2. **Context** - Why this research is needed (e.g., planning a feature)
3. **Handoff directory** - Where to save your findings

## Your Process

### Step 1: Understand the Research Need

Identify what type of research is needed:
- **Library documentation** → Use Nia
- **Best practices / how-to** → Use Perplexity
- **Specific web page content** → Use Firecrawl

### Step 2: Execute Research

Use the MCP scripts via Bash:

**For library documentation (Nia):**
```bash
uv run python -m runtime.harness scripts/mcp/nia_docs.py \
    --query "how to use React hooks for state management" \
    --library "react"
```

**For best practices / general research (Perplexity):**
```bash
uv run python -m runtime.harness scripts/mcp/perplexity_search.py \
    --query "best practices for implementing OAuth2 in Node.js 2024" \
    --mode "research"
```

**For scraping specific documentation pages (Firecrawl):**
```bash
uv run python -m runtime.harness scripts/mcp/firecrawl_scrape.py \
    --url "https://docs.example.com/api/authentication"
```

### Step 3: Synthesize Findings

Combine results from multiple sources into coherent findings:
- Key concepts and patterns
- Code examples (if found)
- Best practices and recommendations
- Potential pitfalls to avoid

### Step 4: Create Handoff

Write your findings to the handoff directory.

**Handoff filename format:** `research-NN-<topic>.md`
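
For example, research on the React hooks query above might produce `research-01-react-hooks.md` (illustrative name; `NN` is a running sequence number).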

````markdown
---
date: [ISO timestamp]
type: research
status: success
topic: [Research topic]
sources: [nia, perplexity, firecrawl]
---

# Research Handoff: [Topic]

## Research Question
[Original question/topic]

## Key Findings

### Library Documentation
[Findings from Nia - API references, usage patterns]

### Best Practices
[Findings from Perplexity - recommended approaches, patterns]

### Additional Sources
[Any scraped documentation]

## Code Examples
```[language]
// Relevant code examples found
```

## Recommendations
- [Recommendation 1]
- [Recommendation 2]

## Potential Pitfalls
- [Thing to avoid 1]
- [Thing to avoid 2]

## Sources
- [Source 1 with link]
- [Source 2 with link]

## For Next Agent
[Summary of what the plan-agent or implement-agent should know]
````

## Return to Caller

After creating your handoff, return:

```
Research Complete

Topic: [Topic]
Handoff: [path to handoff file]

Key findings:
- [Finding 1]
- [Finding 2]
- [Finding 3]

Ready for plan-agent to continue.
```

## Important Guidelines

### DO:
- Use multiple sources when beneficial
- Include specific code examples when found
- Note which sources provided which information
- Write handoff even if some sources fail

### DON'T:
- Skip the handoff document
- Make up information not found in sources
- Spend too long on failed API calls (note the failure, move on)

### Error Handling:
If an MCP tool fails (API key missing, rate limited, etc.), do the following (a sketch follows the list):
1. Note the failure in your handoff
2. Continue with other sources
3. Set status to "partial" if some sources failed
4. Still return useful findings from working sources
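
A minimal sketch of step 1's failure detection, assuming the harness exits non-zero when a call fails:

```bash
# Run the Nia lookup; if it fails, note the failure and move on instead of retrying.
if ! uv run python -m runtime.harness scripts/mcp/nia_docs.py \
    --query "how to use React hooks for state management" \
    --library "react" > nia_results.txt 2> nia_error.txt; then
  # Record this in the handoff and set status to "partial".
  echo "Nia lookup failed: $(head -n 1 nia_error.txt)"
fi
```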

Related Skills

All of the following are from vibeeval/vibecosystem:

  • research: Document codebase as-is with thoughts directory for historical context
  • research-external: External research workflow for docs, web, APIs - NOT codebase exploration
  • repo-research-analyst: Analyze repository structure, patterns, conventions, and documentation for understanding a new codebase
  • workflow-router: Goal-based workflow orchestration - routes tasks to specialist agents based on user goals
  • wiring: Wiring Verification
  • websocket-patterns: Connection management, room patterns, reconnection strategies, message buffering, and binary protocol design
  • visual-verdict: Screenshot comparison QA for frontend development. Takes a screenshot of the current implementation, scores it across multiple visual dimensions, and returns a structured PASS/REVISE/FAIL verdict with concrete fixes. Use when implementing UI from a design reference or verifying visual correctness
  • verification-loop: Comprehensive verification system covering build, types, lint, tests, security, and diff review before a PR
  • vector-db-patterns: Embedding strategies, ANN algorithms, hybrid search, RAG chunking strategies, and reranking for semantic search and retrieval
  • variant-analysis: Find similar vulnerabilities across a codebase after discovering one instance. Uses pattern matching, AST search, Semgrep/CodeQL queries, and manual tracing to propagate findings. Adapted from Trail of Bits. Use after finding a bug to check if the same pattern exists elsewhere
  • validate-agent: Validation agent that validates plan tech choices against current best practices
  • tracing-patterns: OpenTelemetry setup, span context propagation, sampling strategies, Jaeger queries