web-scraping-automation

Automates scraping of website data and API endpoints. Use this skill when the user needs to fetch web content, call APIs, parse data, or create scraper scripts.

33 stars

Best use case

web-scraping-automation is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using web-scraping-automation should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/web-scraping-automation/SKILL.md --create-dirs "https://raw.githubusercontent.com/aAAaqwq/AGI-Super-Team/main/skills/web-scraping-automation/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/web-scraping-automation/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How web-scraping-automation Compares

| Feature / Agent | web-scraping-automation | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It automates scraping of website data and API endpoints. Use this skill when the user needs to fetch web content, call APIs, parse data, or create scraper scripts.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Website Scraping & API Automation

## Overview
This skill specializes in automating website data scraping and API calls, including:
- Analyzing and crawling website structure
- Calling and testing REST/GraphQL APIs
- Creating automated scraper scripts
- Parsing and cleaning data
- Handling anti-scraping mechanisms
- Scheduled tasks and data storage

## Use Cases
- "Scrape the product information from this website"
- "Call this API and parse the response data"
- "Create a script that fetches news on a schedule"
- "Analyze this website's API documentation"
- "Work around this website's anti-scraping limits"

## Tech Stack

### ⚠️ Resource Cleanup Rule (Mandatory)

**After any scraping task that uses a browser, always shut down the Chrome/Selenium processes!**

```python
# Playwright example
from playwright.sync_api import sync_playwright

def scrape_website():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # ... scraping logic ...
        browser.close()

    # ⚠️ Force-clean any leftover processes
    import subprocess
    subprocess.run(['pkill', '-f', 'chrome'], capture_output=True)

# Selenium example
from selenium import webdriver

driver = webdriver.Chrome()
try:
    # ... scraping logic ...
    pass
finally:
    driver.quit()
    # ⚠️ Make sure everything is cleaned up
    import subprocess
    subprocess.run(['pkill', '-f', 'chrome'], capture_output=True)
```

**Why**: this prevents memory leaks and lingering resource usage, and keeps the gateway from being pinned at 100% CPU.

### Python Scraping
- **requests**: HTTP client library
- **BeautifulSoup4**: HTML parsing
- **Scrapy**: full-featured scraping framework
- **Selenium**: browser automation
- **Playwright**: modern browser automation

### JavaScript Scraping
- **axios**: HTTP client
- **cheerio**: server-side jQuery
- **puppeteer**: Chrome automation
- **node-fetch**: Fetch API

## Workflow
1. **Target analysis**:
   - Inspect the site structure and where the data lives
   - Analyze API endpoints and authentication
   - Assess anti-scraping mechanisms

2. **Solution design**:
   - Choose the right tech stack
   - Design the data extraction strategy
   - Plan error handling and retry logic

3. **Script development**:
   - Write the scraper code
   - Implement the data parsing logic
   - Add logging and monitoring

4. **Testing and tuning**:
   - Verify data accuracy
   - Optimize performance and stability
   - Handle edge cases
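The retry logic planned in step 2 can be sketched as a small backoff decorator. This is a minimal illustration, not a full implementation; `flaky_fetch` is a hypothetical stand-in for a real network call:

```python
import time
from functools import wraps

def retry(max_attempts=3, base_delay=0.5, exceptions=(Exception,)):
    """Retry the wrapped function with exponential backoff between attempts."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == max_attempts:
                        raise  # out of attempts: propagate the last error
                    time.sleep(delay)
                    delay *= 2  # double the wait each time
        return wrapper
    return decorator

@retry(max_attempts=3, base_delay=0.01, exceptions=(ConnectionError,))
def flaky_fetch(_state={"calls": 0}):
    """Hypothetical fetch that fails twice before succeeding."""
    _state["calls"] += 1
    if _state["calls"] < 3:
        raise ConnectionError("temporary failure")
    return "ok"
```

In a real scraper you would decorate the function that performs the HTTP request and restrict `exceptions` to transient errors (timeouts, connection resets) so that hard failures such as 404s surface immediately.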

## Best Practices
- Respect robots.txt rules
- Set reasonable request intervals
- Use a realistic User-Agent and request headers
- Implement retry on errors
- Deduplicate and validate data
- Use a proxy pool (when needed)
- Keep raw data and logs
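The first two practices (robots.txt compliance and request spacing) can be combined in a small helper built on the standard library's `urllib.robotparser`; the robots.txt content below is a made-up example:

```python
import time
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Crawl-delay: 2
"""

def make_robots_checker(robots_text):
    """Parse robots.txt text into a RobotFileParser."""
    rp = RobotFileParser()
    rp.parse(robots_text.splitlines())
    return rp

rp = make_robots_checker(ROBOTS_TXT)

def polite_delay(rp, user_agent="*", default=1.0):
    """Sleep for the site's Crawl-delay, falling back to a default interval."""
    time.sleep(rp.crawl_delay(user_agent) or default)
```

Usage: check `rp.can_fetch("*", url)` before each request and call `polite_delay(rp)` between requests. In practice you would fetch the real robots.txt with `rp.set_url(...)` and `rp.read()` instead of parsing a literal string.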

## Common Scenarios

### 1. Simple Page Scraping
```python
import requests
from bs4 import BeautifulSoup

def scrape_website(url):
    headers = {'User-Agent': 'Mozilla/5.0'}
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, 'html.parser')

    # Extract the data
    data = []
    for item in soup.select('.product'):
        title = item.select_one('.title')
        price = item.select_one('.price')
        if title and price:  # skip items missing either field
            data.append({'title': title.text.strip(), 'price': price.text.strip()})
    return data
```

### 2. API Calls
```python
import requests

def call_api(endpoint, params=None):
    headers = {
        'Authorization': 'Bearer YOUR_TOKEN',
        'Content-Type': 'application/json'
    }
    response = requests.get(endpoint, headers=headers, params=params, timeout=10)
    response.raise_for_status()  # fail fast on 4xx/5xx responses
    return response.json()
```

### 3. Dynamic Page Scraping
```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def scrape_dynamic_page(url):
    driver = webdriver.Chrome()
    try:
        driver.get(url)

        # Wait for the page to load
        driver.implicitly_wait(10)

        # Extract the data
        elements = driver.find_elements(By.CLASS_NAME, 'item')
        data = [elem.text for elem in elements]
    finally:
        driver.quit()  # always release the browser
    return data
```

## Anti-Scraping Countermeasures
- **Header spoofing**: mimic a real browser
- **Proxy rotation**: use a proxy pool
- **CAPTCHA handling**: OCR or third-party services
- **Cookie management**: maintain session state
- **Rate limiting**: avoid triggering throttles
- **JavaScript rendering**: use Selenium/Playwright
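The header-spoofing and rotation items above can be sketched as a small helper. The User-Agent strings are truncated placeholders, not current real-world values:

```python
import itertools

# Truncated example User-Agent strings (use realistic, current ones in practice).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
]

_ua_cycle = itertools.cycle(USER_AGENTS)

def next_headers(referer=None):
    """Build browser-like request headers with a rotated User-Agent."""
    headers = {
        "User-Agent": next(_ua_cycle),
        "Accept": "text/html,application/xhtml+xml",
        "Accept-Language": "en-US,en;q=0.9",
    }
    if referer:
        headers["Referer"] = referer
    return headers
```

Usage: pass the result to each request, e.g. `requests.get(url, headers=next_headers())`; combine with proxy rotation by also supplying a `proxies=` mapping.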

## Data Storage Options
- **CSV/Excel**: simple data exports
- **JSON**: structured data storage
- **Databases**: MySQL, PostgreSQL, MongoDB
- **Cloud storage**: S3, OSS
- **Data warehouses**: for large-scale analytics
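For the two simplest options, CSV and JSON, serialization needs only the standard library. A minimal sketch; the sample records are hypothetical:

```python
import csv
import io
import json

def to_csv(records, fieldnames):
    """Serialize a list of dicts to CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

def to_json(records):
    """Serialize records to pretty-printed JSON, keeping non-ASCII text readable."""
    return json.dumps(records, ensure_ascii=False, indent=2)

# Hypothetical scraped records.
records = [
    {"title": "Widget A", "price": "9.99"},
    {"title": "Widget B", "price": "14.50"},
]
```

For anything beyond one-off exports, write the same records into a database with a unique key on a natural identifier (URL, SKU) so that re-runs deduplicate instead of appending.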

Related Skills

zoom-automation

33
from aAAaqwq/AGI-Super-Team

Automate Zoom meeting creation, management, recordings, webinars, and participant tracking via Rube MCP (Composio). Always search tools first for current schemas.

zoho-crm-automation

33
from aAAaqwq/AGI-Super-Team

Automate Zoho CRM tasks via Rube MCP (Composio): create/update records, search contacts, manage leads, and convert leads. Always search tools first for current schemas.

zendesk-automation

33
from aAAaqwq/AGI-Super-Team

Automate Zendesk tasks via Rube MCP (Composio): tickets, users, organizations, replies. Always search tools first for current schemas.

youtube-automation

33
from aAAaqwq/AGI-Super-Team

Automate YouTube tasks via Rube MCP (Composio): upload videos, manage playlists, search content, get analytics, and handle comments. Always search tools first for current schemas.

workflow-automation

33
from aAAaqwq/AGI-Super-Team

Workflow automation is the infrastructure that makes AI agents reliable. Without durable execution, a network hiccup during a 10-step payment flow means lost money and angry customers. With it, workflows resume exactly where they left off. This skill covers the platforms (n8n, Temporal, Inngest) and patterns (sequential, parallel, orchestrator-worker) that turn brittle scripts into production-grade automation. Key insight: The platforms make different tradeoffs. n8n optimizes for accessibility

whatsapp-automation

33
from aAAaqwq/AGI-Super-Team

Automate WhatsApp Business tasks via Rube MCP (Composio): send messages, manage templates, upload media, and handle contacts. Always search tools first for current schemas.

wecom-cs-automation

33
from aAAaqwq/AGI-Super-Team

WeCom (WeChat Work) customer-service automation system. Auto-accepts friend requests, answers questions from a knowledge base, and alerts a human when it hits an unknown question. An AI assistant bot for WeCom customer-service scenarios.

wecom-automation

33
from aAAaqwq/AGI-Super-Team

Direct automation for personal WeCom (WeChat Work) accounts. Built on the Wechaty framework: receives WeCom messages, auto-accepts friend requests, answers from a knowledge base, and alerts a human when needed. For personal WeCom bots and automation assistants.

webflow-automation

33
from aAAaqwq/AGI-Super-Team

Automate Webflow CMS collections, site publishing, page management, asset uploads, and ecommerce orders via Rube MCP (Composio). Always search tools first for current schemas.

vercel-automation

33
from aAAaqwq/AGI-Super-Team

Automate Vercel tasks via Rube MCP (Composio): manage deployments, domains, DNS, env vars, projects, and teams. Always search tools first for current schemas.

twitter-automation

33
from aAAaqwq/AGI-Super-Team

Automate Twitter/X tasks via Rube MCP (Composio): posts, search, users, bookmarks, lists, media. Always search tools first for current schemas.

trello-automation

33
from aAAaqwq/AGI-Super-Team

Automate Trello boards, cards, and workflows via Rube MCP (Composio). Create cards, manage lists, assign members, and search across boards programmatically.