web-fetch
Fetches web content with intelligent content extraction, converting HTML to clean markdown. Use it for documentation, articles, and reference pages at http/https URLs.
Best use case
web-fetch is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using web-fetch can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at .claude/skills/web-fetch/SKILL.md inside your project
- Restart your AI agent; it will auto-discover the skill
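As a sketch, the manual steps above look like the following in a POSIX shell. The download URL is not given in this page, so the `curl` line is a commented placeholder and a stand-in SKILL.md is created locally for illustration:

```shell
# Create the skill directory the agent scans (path from the steps above)
mkdir -p .claude/skills/web-fetch

# A real install would fetch the file from the skill's GitHub repo, e.g.:
#   curl -fsSL "<raw-url-to-SKILL.md>" -o .claude/skills/web-fetch/SKILL.md
# Stand-in file so the layout can be verified without a network:
printf '# Web Content Fetching\n' > .claude/skills/web-fetch/SKILL.md

# Confirm the agent will find the skill at the expected path
test -f .claude/skills/web-fetch/SKILL.md && echo "skill installed"
```

After this, restarting the agent lets it discover the skill from the directory layout alone.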
Frequently Asked Questions
What does this skill do?
Fetches web content with intelligent content extraction, converting HTML to clean markdown. Use it for documentation, articles, and reference pages at http/https URLs.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Web Content Fetching

Fetch web content using `curl | html2markdown` with CSS selectors for clean, complete markdown output.

## Quick Usage (Known Sites)

Use site-specific selectors for best results:

```bash
# Anthropic docs
curl -s "<url>" | html2markdown --include-selector "#content-container"

# MDN Web Docs
curl -s "<url>" | html2markdown --include-selector "article"

# GitHub docs
curl -s "<url>" | html2markdown --include-selector "article" --exclude-selector "nav,.sidebar"

# Generic article pages
curl -s "<url>" | html2markdown --include-selector "article,main,[role=main]" --exclude-selector "nav,header,footer"
```

## Site Patterns

| Site | Include Selector | Exclude Selector |
|------|------------------|------------------|
| platform.claude.com | `#content-container` | - |
| docs.anthropic.com | `#content-container` | - |
| developer.mozilla.org | `article` | - |
| github.com (docs) | `article` | `nav,.sidebar` |
| Generic | `article,main` | `nav,header,footer,script,style` |

## Universal Fallback (Unknown Sites)

For sites without known patterns, use the Bun script, which auto-detects content:

```bash
bun ~/.claude/skills/web-fetch/fetch.ts "<url>"
```

### Setup (one-time)

```bash
cd ~/.claude/skills/web-fetch && bun install
```

## Finding the Right Selector

When a site isn't in the patterns list:

```bash
# Check what content containers exist
curl -s "<url>" | grep -o '<article[^>]*>\|<main[^>]*>\|id="[^"]*content[^"]*"' | head -10

# Test a selector
curl -s "<url>" | html2markdown --include-selector "<selector>" | head -30

# Check line count
curl -s "<url>" | html2markdown --include-selector "<selector>" | wc -l
```

## Options Reference

```bash
--include-selector "CSS"  # Only include matching elements
--exclude-selector "CSS"  # Remove matching elements
--domain "https://..."    # Convert relative links to absolute
```

## Comparison

| Method | Anthropic Docs | Code Blocks | Complexity |
|--------|----------------|-------------|------------|
| Full page | 602 lines | Yes | Noisy |
| `--include-selector "#content-container"` | 385 lines | Yes | Clean |
| Bun script (universal) | 383 lines | Yes | Clean |

## Troubleshooting

**Wrong content selected**: The site may have multiple articles. Inspect the HTML:

```bash
curl -s "<url>" | grep -o '<article[^>]*>'
```

**Empty output**: The selector doesn't match. Try broader selectors like `main` or `body`.

**Missing code blocks**: Check if the site uses non-standard code formatting.

**Client-rendered content**: If the HTML only has "Loading..." placeholders, the content is JS-rendered. Neither curl nor the Bun script can extract it; use browser-based tools.
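The Site Patterns table above is a host-to-selector lookup; as a sketch, it could be wrapped in a small shell helper (the function name `selector_for` is illustrative and not part of the skill):

```shell
# Map a hostname to the include-selector from the Site Patterns table.
selector_for() {
  case "$1" in
    platform.claude.com|docs.anthropic.com) echo '#content-container' ;;
    developer.mozilla.org)                  echo 'article' ;;
    github.com)                             echo 'article' ;;
    *)                                      echo 'article,main' ;;  # generic fallback
  esac
}

# Example usage (assumes html2markdown is installed):
#   host=$(printf '%s' "$url" | sed -E 's#https?://([^/]+).*#\1#')
#   curl -s "$url" | html2markdown --include-selector "$(selector_for "$host")"
```

Keeping the lookup pure (no network calls) makes it easy to extend with new site patterns as they are discovered.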
Related Skills
defold-examples-fetch
Fetches Defold code examples by topic. Use when looking for practical implementation patterns, sample code, or how to do something specific in Defold.
defold-docs-fetch
Fetches Defold manuals and documentation. Use when looking up how Defold features work, understanding concepts, components, workflows, platform setup, or needing guidance beyond API reference.
fetching-dbt-docs
Retrieves and searches dbt documentation pages in LLM-friendly markdown format. Use when fetching dbt documentation, looking up dbt features, or answering questions about dbt Cloud, dbt Core, or the dbt Semantic Layer.
native-data-fetching
Use when implementing or debugging ANY network request, API call, or data fetching. Covers fetch API, axios, React Query, SWR, error handling, caching strategies, offline support.
fetch-url
Renders a web page URL, extracts the main content with noise removed, and outputs Markdown (default) or other formats/raw HTML, to reduce token usage.
fetching-library-docs
Token-efficient library API documentation fetcher using Context7 MCP with 77% token savings. Fetches code examples, API references, and usage patterns for published libraries (React, Next.js, Prisma, etc). Use when users ask "how do I use X library", need code examples, want API syntax, or are learning a framework's official API. Triggers: "Show me React hooks", "Prisma query syntax", "Next.js routing API". NOT for exploring repo internals/source code (use researching-with-deepwiki) or local files.
DuckDuckGo Search via web_fetch
Search the web using DuckDuckGo Lite's HTML interface, parsed via `web_fetch`. No API key or package install required.
Scrapling Web Fetch
Prefer this skill when the user wants to fetch web page content, extract the main text, convert a page to markdown/text, or scrape an article body.
nextjs-data-fetching
Fetch API, Caching, and Revalidation strategies. Use when fetching data, configuring cache behavior, or implementing revalidation in Next.js. (triggers: **/*.tsx, **/service.ts, fetch, revalidate, no-store, force-cache)
langsmith-fetch
Debug LangChain and LangGraph agents by fetching execution traces from LangSmith Studio. Use when debugging agent behavior, investigating errors, analyzing tool calls, checking memory operations, or examining agent performance. Automatically fetches recent traces and analyzes execution patterns. Requires langsmith-fetch CLI installed.
brandfetch-automation
Automate Brandfetch tasks via Rube MCP (Composio). Always search tools first for current schemas.
Daily Logs
Record the user's daily activities, progress, decisions, and learnings in a structured, chronological format.