research-ops
Evidence-first current-state research workflow for ECC. Use when the user wants fresh facts, comparisons, enrichment, or a recommendation built from current public evidence and any supplied local context.
Best use case
research-ops is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using research-ops should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/research-ops/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
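If you prefer to script the manual steps, a minimal sketch along these lines should work. The repository URL below is a placeholder, not the real link; substitute the raw SKILL.md URL from the GitHub page referenced above.

```python
from pathlib import Path
from urllib.request import urlopen

# Placeholder URL: replace with the raw SKILL.md link from the GitHub page above.
SKILL_URL = "https://raw.githubusercontent.com/<org>/<repo>/main/SKILL.md"

# Target location inside the current project, as described in the manual steps.
target = Path(".claude/skills/research-ops/SKILL.md")
target.parent.mkdir(parents=True, exist_ok=True)

with urlopen(SKILL_URL) as resp:
    target.write_bytes(resp.read())

print(f"Installed {target}; restart your AI agent so it rediscovers skills.")
```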
How research-ops Compares
| Feature | research-ops | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Evidence-first current-state research workflow for ECC. Use when the user wants fresh facts, comparisons, enrichment, or a recommendation built from current public evidence and any supplied local context.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
ChatGPT vs Claude for Agent Skills
Compare ChatGPT and Claude for AI agent skills across coding, writing, research, and reusable workflow execution.
Top AI Agents for Productivity
See the top AI agent skills for productivity, workflow automation, operational systems, documentation, and everyday task execution.
SKILL.md Source
# Research Ops

Use this when the user asks to research something current, compare options, enrich people or companies, or turn repeated lookups into a monitored workflow.

This is the operator wrapper around the repo's research stack. It is not a replacement for `deep-research`, `exa-search`, or `market-research`; it tells you when and how to use them together.

## Skill Stack

Pull these ECC-native skills into the workflow when relevant:

- `exa-search` for fast current-web discovery
- `deep-research` for multi-source synthesis with citations
- `market-research` when the end result should be a recommendation or ranked decision
- `lead-intelligence` when the task is people/company targeting instead of generic research
- `knowledge-ops` when the result should be stored in durable context afterward

## When to Use

- user says "research", "look up", "compare", "who should I talk to", or "what's the latest"
- the answer depends on current public information
- the user already supplied evidence and wants it factored into a fresh recommendation
- the task may be recurring enough that it should become a monitor instead of a one-off lookup

## Guardrails

- do not answer current questions from stale memory when fresh search is cheap
- separate:
  - sourced fact
  - user-provided evidence
  - inference
  - recommendation
- do not spin up a heavyweight research pass if the answer is already in local code or docs

## Workflow

### 1. Start from what the user already gave you

Normalize any supplied material into:

- already-evidenced facts
- needs verification
- open questions

Do not restart the analysis from zero if the user already built part of the model.

### 2. Classify the ask

Choose the right lane before searching:

- quick factual answer
- comparison or decision memo
- lead/enrichment pass
- recurring monitoring candidate

### 3. Take the lightest useful evidence path first

- use `exa-search` for fast discovery
- escalate to `deep-research` when synthesis or multiple sources matter
- use `market-research` when the outcome should end in a recommendation
- hand off to `lead-intelligence` when the real ask is target ranking or warm-path discovery

### 4. Report with explicit evidence boundaries

For important claims, say whether they are:

- sourced facts
- user-supplied context
- inference
- recommendation

Freshness-sensitive answers should include concrete dates.

### 5. Decide whether the task should stay manual

If the user is likely to ask the same research question repeatedly, say so explicitly and recommend a monitoring or workflow layer instead of repeating the same manual search forever.

## Output Format

```text
QUESTION TYPE
- factual / comparison / enrichment / monitoring

EVIDENCE
- sourced facts
- user-provided context

INFERENCE
- what follows from the evidence

RECOMMENDATION
- answer or next move
- whether this should become a monitor
```

## Pitfalls

- do not mix inference into sourced facts without labeling it
- do not ignore user-provided evidence
- do not use a heavy research lane for a question local repo context can answer
- do not give freshness-sensitive answers without dates

## Verification

- important claims are labeled by evidence type
- freshness-sensitive outputs include dates
- the final recommendation matches the actual research mode used
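As a rough illustration of the Output Format contract above, a small checker along these lines could flag answers that drop one of the four required sections. This is a sketch of mine, not part of the skill; the function name and sample answer are hypothetical, while the section headers come from the Output Format block.

```python
# Hypothetical helper (not part of the skill): checks that a research answer
# contains the four section headers mandated by the Output Format block.
REQUIRED_SECTIONS = ["QUESTION TYPE", "EVIDENCE", "INFERENCE", "RECOMMENDATION"]

def missing_sections(answer: str) -> list[str]:
    """Return the required headers that do not appear on their own line."""
    lines = {line.strip() for line in answer.splitlines()}
    return [section for section in REQUIRED_SECTIONS if section not in lines]

if __name__ == "__main__":
    sample = """QUESTION TYPE
- factual

EVIDENCE
- sourced facts
- user-provided context

INFERENCE
- what follows from the evidence

RECOMMENDATION
- answer or next move
"""
    print(missing_sections(sample) or "all required sections present")
```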
Related Skills
market-research
Conduct market research, competitive analysis, investor due diligence, and industry intelligence with source attribution and decision-oriented summaries. Use when the user wants market sizing, competitor comparisons, fund research, technology scans, or research that informs business decisions.
deep-research
Multi-source deep research using firecrawl and exa MCPs. Searches the web, synthesizes findings, and delivers cited reports with source attribution. Use when the user wants thorough research on any topic with evidence and citations.
workspace-surface-audit
Audit the active repo, MCP servers, plugins, connectors, env surfaces, and harness setup, then recommend the highest-value ECC-native skills, hooks, agents, and operator workflows. Use when the user wants help setting up Claude Code or understanding what capabilities are actually available in their environment.
ui-demo
Record polished UI demo videos using Playwright. Use when the user asks to create a demo, walkthrough, screen recording, or tutorial video of a web application. Produces WebM videos with visible cursor, natural pacing, and professional feel.
token-budget-advisor
Offers the user an informed choice about how much response depth to consume before answering. Use this skill when the user explicitly wants to control response length, depth, or token budget. TRIGGER when: "token budget", "token count", "token usage", "token limit", "response length", "answer depth", "short version", "brief answer", "detailed answer", "exhaustive answer", "respuesta corta vs larga", "cuántos tokens", "ahorrar tokens", "responde al 50%", "dame la versión corta", "quiero controlar cuánto usas", or clear variants where the user is explicitly asking to control answer size or depth. DO NOT TRIGGER when: user has already specified a level in the current session (maintain it), the request is clearly a one-word answer, or "token" refers to auth/session/payment tokens rather than response size.
skill-comply
Visualize whether skills, rules, and agent definitions are actually followed — auto-generates scenarios at 3 prompt strictness levels, runs agents, classifies behavioral sequences, and reports compliance rates with full tool call timelines
santa-method
Multi-agent adversarial verification with convergence loop. Two independent review agents must both pass before output ships.
safety-guard
Use this skill to prevent destructive operations when working on production systems or running agents autonomously.
repo-scan
Cross-stack source code asset audit — classifies every file, detects embedded third-party libraries, and delivers actionable four-level verdicts per module with interactive HTML reports.
project-flow-ops
Operate execution flow across GitHub and Linear by triaging issues and pull requests, linking active work, and keeping GitHub public-facing while Linear remains the internal execution layer. Use when the user wants backlog control, PR triage, or GitHub-to-Linear coordination.
product-lens
Use this skill to validate the "why" before building, run product diagnostics, and pressure-test product direction before the request becomes an implementation contract.
openclaw-persona-forge
Forge a complete lobster-soul profile for an OpenClaw AI Agent. Based on the user's preferences or a random gacha draw, it outputs an identity positioning, a soul description (SOUL.md), persona-specific baseline rules, a name, and an avatar image-generation prompt. If the current environment provides an approved image-generation skill, it can also produce avatar images in a consistent style. Use when the user wants to create, design, or customize an OpenClaw lobster soul. Not for: fine-tuning an existing SOUL.md, character design for non-OpenClaw platforms, or purely tool-style agents with no personality. Trigger words: lobster soul, OpenClaw soul, pet-lobster soul, lobster character, lobster positioning, lobster murder-mystery role, lobster game character, lobster NPC, lobster personality, lobster backstory, lobster SOUL, gacha, gacha draw, random lobster.