Scrapling Web Fetch
Prefer this skill when the user wants to fetch web page content, extract an article's main body, convert a page to markdown/text, or scrape article content.
Best use case
Scrapling Web Fetch is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using Scrapling Web Fetch should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/clean-content-fetch/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
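The manual steps above can be sketched as a short shell session. The download URL is left as a placeholder, since the page links the repository elsewhere:

```shell
# Sketch of the manual installation; substitute the raw SKILL.md URL
# from the GitHub link at the top of the page.
set -eu
SKILL_DIR=".claude/skills/clean-content-fetch"
mkdir -p "$SKILL_DIR"
# curl -fsSL "<raw SKILL.md URL>" -o "$SKILL_DIR/SKILL.md"
ls -d "$SKILL_DIR"
```

After the file is in place, restarting the agent triggers skill discovery.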
How Scrapling Web Fetch Compares
| Feature / Agent | Scrapling Web Fetch | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Prefer this skill when the user wants to fetch web page content, extract an article's main body, convert a page to markdown/text, or scrape article content.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Scrapling Web Fetch

Prefer this skill when the user wants to fetch web page content, extract an article's main body, convert a page to markdown/text, or scrape article content.

## Default workflow

1. Run `python3 scripts/scrapling_fetch.py <url> <max_chars>`
2. Default body-selector priority:
   - `article`
   - `main`
   - `.post-content`
   - `[class*="body"]`
3. Once a selector matches the main content, convert it to Markdown with `html2text`
4. If none match, fall back to `body`
5. Finally, truncate the output to `max_chars`

## Usage

```bash
python3 /Users/zzd/.openclaw/workspace/skills/scrapling-web-fetch/scripts/scrapling_fetch.py <url> 30000
```

## Dependencies

Check for these first:

- `scrapling`
- `html2text`
- `curl_cffi`
- `playwright`
- `browserforge`

Use a dedicated virtual environment to avoid the system Python's PEP 668 restrictions:

```bash
python3 -m venv /Users/zzd/.openclaw/workspace/.venvs/clean-content-fetch
/Users/zzd/.openclaw/workspace/.venvs/clean-content-fetch/bin/pip install scrapling html2text curl_cffi playwright browserforge
/Users/zzd/.openclaw/workspace/.venvs/clean-content-fetch/bin/python -m playwright install chromium
```

When running the script directly, prefer the Python from that virtual environment:

```bash
/Users/zzd/.openclaw/workspace/.venvs/clean-content-fetch/bin/python /Users/zzd/.openclaw/workspace/skills/scrapling-web-fetch/scripts/scrapling_fetch.py <url> 30000
```

## Output conventions

By default the script prints the extracted main content as Markdown.
For structured output, append `--json`.
To debug which selector was matched, check the stderr output.

## Additional resources

- Usage reference: `/Users/zzd/.openclaw/workspace/skills/scrapling-web-fetch/references/usage.md`
- Selector strategy: `/Users/zzd/.openclaw/workspace/skills/scrapling-web-fetch/references/selectors.md`
- Unified entry point: `/Users/zzd/.openclaw/workspace/skills/scrapling-web-fetch/scripts/fetch-web-content`

## When to use this skill

- Fetching article bodies
- Scraping the main content of blogs, news, or announcements
- Converting web pages to Markdown for later summarization
- When a plain fetch performs poorly and you want more robust scraping of modern pages
- Fetching the main content of Xiaohongshu share short links or note landing pages

## Fetching Xiaohongshu pages

For `xhslink.com` short links or Xiaohongshu note pages, run the script with the virtual-environment Python directly:

```bash
/Users/zzd/.openclaw/workspace/.venvs/clean-content-fetch/bin/python /Users/zzd/.openclaw/workspace/skills/scrapling-web-fetch/scripts/scrapling_fetch.py 'http://xhslink.com/o/9745hugimlD' 30000
```

Notes:

- The script resolves the short link first, then fetches the landing page's main content
- Suitable for extracting a note's caption, title, and body text
- If the page requires more complex interaction, switch to browser automation

## When not to use

- Full browser interaction, clicking, login, or pagination is needed: use browser automation instead
- You only need JSON from an API: requesting the API directly is a better fit
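The selector fallback chain in the workflow above can be sketched in plain Python. This is a simplified stand-in for illustration only: the real script uses Scrapling's CSS selection engine, while here regular expressions approximate the same priority order and nested elements are not handled:

```python
import re

# Priority mirrors the skill's documented fallback chain:
# article -> main -> .post-content -> [class*="body"] -> body
PATTERNS = [
    r"<article\b[^>]*>(.*?)</article>",
    r"<main\b[^>]*>(.*?)</main>",
    r'<\w+\b[^>]*class="[^"]*post-content[^"]*"[^>]*>(.*?)</\w+>',
    r'<\w+\b[^>]*class="[^"]*body[^"]*"[^>]*>(.*?)</\w+>',
    r"<body\b[^>]*>(.*?)</body>",  # final fallback
]

def pick_main_content(html: str, max_chars: int = 30000) -> str:
    """Return the first matched content block, truncated to max_chars."""
    for pattern in PATTERNS:
        m = re.search(pattern, html, re.DOTALL | re.IGNORECASE)
        if m:
            return m.group(1).strip()[:max_chars]
    # Nothing matched at all: return the raw input, truncated.
    return html[:max_chars]
```

The matched fragment would then be passed through `html2text` to produce the Markdown output the skill describes.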
Related Skills
defold-examples-fetch
Fetches Defold code examples by topic. Use when looking for practical implementation patterns, sample code, or how to do something specific in Defold.
defold-docs-fetch
Fetches Defold manuals and documentation. Use when looking up how Defold features work, understanding concepts, components, workflows, platform setup, or needing guidance beyond API reference.
fetching-dbt-docs
Retrieves and searches dbt documentation pages in LLM-friendly markdown format. Use when fetching dbt documentation, looking up dbt features, or answering questions about dbt Cloud, dbt Core, or the dbt Semantic Layer.
scrapling-skill
Install, troubleshoot, and use Scrapling CLI to extract HTML, Markdown, or text from webpages. Use this skill whenever the user mentions Scrapling, `uv tool install scrapling`, `scrapling extract`, WeChat/mp.weixin articles, browser-backed page fetching, or needs help deciding between static and dynamic extraction.
native-data-fetching
Use when implementing or debugging ANY network request, API call, or data fetching. Covers fetch API, axios, React Query, SWR, error handling, caching strategies, offline support.
fetch-url
Renders a web page URL, strips noise to extract the main content, and outputs Markdown (default) or other formats/raw HTML to reduce token usage.
fetching-library-docs
Token-efficient library API documentation fetcher using Context7 MCP with 77% token savings. Fetches code examples, API references, and usage patterns for published libraries (React, Next.js, Prisma, etc). Use when users ask "how do I use X library", need code examples, want API syntax, or are learning a framework's official API. Triggers: "Show me React hooks", "Prisma query syntax", "Next.js routing API". NOT for exploring repo internals/source code (use researching-with-deepwiki) or local files.
web-fetch
Fetches web content with intelligent content extraction, converting HTML to clean markdown. Use for documentation, articles, and reference pages http/https URLs.
DuckDuckGo Search via web_fetch
Search the web using DuckDuckGo Lite's HTML interface, parsed via `web_fetch`. No API key or package install required.
nextjs-data-fetching
Fetch API, Caching, and Revalidation strategies. Use when fetching data, configuring cache behavior, or implementing revalidation in Next.js. (triggers: **/*.tsx, **/service.ts, fetch, revalidate, no-store, force-cache)
langsmith-fetch
Debug LangChain and LangGraph agents by fetching execution traces from LangSmith Studio. Use when debugging agent behavior, investigating errors, analyzing tool calls, checking memory operations, or examining agent performance. Automatically fetches recent traces and analyzes execution patterns. Requires langsmith-fetch CLI installed.
brandfetch-automation
Automate Brandfetch tasks via Rube MCP (Composio). Always search tools first for current schemas.