deep-research
Run autonomous research tasks that plan, search, read, and synthesize information into comprehensive reports.
28,273 stars
by sickn33
Installation
Claude Code / Cursor / Codex
curl -o ~/.claude/skills/deep-research/SKILL.md --create-dirs "https://raw.githubusercontent.com/sickn33/antigravity-awesome-skills/main/plugins/antigravity-awesome-skills-claude/skills/deep-research/SKILL.md"
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/deep-research/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
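A quick way to confirm the placement is correct, sketched in Python (the path is the one from the steps above; the printed messages are illustrative):

```python
from pathlib import Path

# Project-relative location where the agent auto-discovers the skill
skill_path = Path(".claude/skills/deep-research/SKILL.md")

if skill_path.is_file():
    print("skill installed: restart your AI agent to pick it up")
else:
    print(f"skill not found at {skill_path}: download SKILL.md and place it there")
```

Run this from your project root; the agent looks for the file relative to the project, not your home directory (the one-line installer above targets `~/.claude` instead, which installs it globally).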
How deep-research Compares
| Feature / Agent | deep-research | Standard Approach |
|---|---|---|
| Platform Support | Multiple (Claude Code, Cursor, Codex) | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Run autonomous research tasks that plan, search, read, and synthesize information into comprehensive reports.
Which AI agents support this skill?
This skill is compatible with multiple AI agents, including Claude Code, Cursor, and Codex.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Gemini Deep Research Skill

Run autonomous research tasks that plan, search, read, and synthesize information into comprehensive reports.

## When to Use This Skill

Use this skill when:

- Performing market analysis
- Conducting competitive landscaping
- Creating literature reviews
- Doing technical research
- Performing due diligence
- You need detailed, cited research reports

## Requirements

- Python 3.8+
- httpx: `pip install -r requirements.txt`
- `GEMINI_API_KEY` environment variable

## Setup

1. Get a Gemini API key from [Google AI Studio](https://aistudio.google.com/)
2. Set the environment variable:

```bash
export GEMINI_API_KEY=your-api-key-here
```

Or create a `.env` file in the skill directory.

## Usage

### Start a research task

```bash
python3 scripts/research.py --query "Research the history of Kubernetes"
```

### With structured output format

```bash
python3 scripts/research.py --query "Compare Python web frameworks" \
  --format "1. Executive Summary\n2. Comparison Table\n3. Recommendations"
```

### Stream progress in real-time

```bash
python3 scripts/research.py --query "Analyze EV battery market" --stream
```

### Start without waiting

```bash
python3 scripts/research.py --query "Research topic" --no-wait
```

### Check status of running research

```bash
python3 scripts/research.py --status <interaction_id>
```

### Wait for completion

```bash
python3 scripts/research.py --wait <interaction_id>
```

### Continue from previous research

```bash
python3 scripts/research.py --query "Elaborate on point 2" --continue <interaction_id>
```

### List recent research

```bash
python3 scripts/research.py --list
```

## Output Formats

- **Default**: Human-readable markdown report
- **JSON** (`--json`): Structured data for programmatic use
- **Raw** (`--raw`): Unprocessed API response

## Cost & Time

| Metric | Value |
|--------|-------|
| Time | 2-10 minutes per task |
| Cost | $2-5 per task (varies by complexity) |
| Token usage | ~250k-900k input, ~60k-80k output |

## Best Use Cases

- Market analysis and competitive landscaping
- Technical literature reviews
- Due diligence research
- Historical research and timelines
- Comparative analysis (frameworks, products, technologies)

## Workflow

1. User requests research → run `--query "..."`
2. Inform the user of the estimated time (2-10 minutes)
3. Monitor with `--stream` or poll with `--status`
4. Return formatted results
5. Use `--continue` for follow-up questions

## Exit Codes

- **0**: Success
- **1**: Error (API error, config issue, timeout)
- **130**: Cancelled by user (Ctrl+C)
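The monitor/poll step of the workflow can be sketched as a small Python wrapper. This is a hypothetical helper, not part of the skill: `check_status` stands in for whatever you use to run `python3 scripts/research.py --status <interaction_id>` and map its result to a state name, and the state strings are assumptions for illustration.

```python
import time

def wait_for_research(check_status, poll_interval=30, timeout=600):
    """Poll until a research task finishes or the timeout elapses.

    check_status: a callable you supply, e.g. one that shells out to
    `python3 scripts/research.py --status <interaction_id>` and returns
    'running', 'done', or 'error' (hypothetical state names).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = check_status()
        if state in ("done", "error"):
            return state
        time.sleep(poll_interval)  # tasks typically take 2-10 minutes
    return "timeout"
```

For a single blocking call, the built-in `--wait <interaction_id>` flag is simpler; a wrapper like this is only worth it when you start several tasks with `--no-wait` and want to track them from your own code.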