firecrawl-crawl
Bulk extract content from an entire website or site section. Use this skill when the user wants to crawl a site, extract all pages from a docs section, bulk-scrape multiple pages following links, or says "crawl", "get all the pages", "extract everything under /docs", "bulk extract", or needs content from many pages on the same site. Handles depth limits, path filtering, and concurrent extraction.
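For instance, a scoped crawl of a docs section might look like the sketch below. The URL and paths are placeholders, and every flag used here appears in the Options table of the SKILL.md source further down the page:

```bash
# Hedged sketch: crawl only /docs, skip the changelog, and cap depth and parallelism.
# "<url>", /docs, and /docs/changelog are placeholders — substitute your own site and paths.
firecrawl crawl "<url>" \
  --include-paths /docs \
  --exclude-paths /docs/changelog \
  --max-depth 2 \
  --max-concurrency 5 \
  --limit 50 \
  --wait --progress \
  -o .firecrawl/crawl.json
```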
Best use case
firecrawl-crawl is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using firecrawl-crawl should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Manual Installation (Claude Code / Cursor / Codex)
- Download SKILL.md from GitHub
- Place it in `.claude/skills/firecrawl-crawl/SKILL.md` inside your project (a command-line sketch of these steps follows this list)
- Restart your AI agent — it will auto-discover the skill
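A minimal sketch of the manual steps, assuming a POSIX shell with curl available; `<raw-skill-md-url>` is a placeholder for the SKILL.md link in the GitHub repository:

```bash
# Hedged sketch: manual installation into a project.
# <raw-skill-md-url> is a placeholder — use the SKILL.md link from the repo linked at the top of the page.
mkdir -p .claude/skills/firecrawl-crawl
curl -fsSL "<raw-skill-md-url>" -o .claude/skills/firecrawl-crawl/SKILL.md
# Then restart your AI agent so it discovers the new skill.
```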
How firecrawl-crawl Compares
| Feature / Agent | firecrawl-crawl | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
firecrawl-crawl bulk-extracts content from an entire website or site section: it follows links up to a depth limit, filters URLs by path, and scrapes pages concurrently, returning the content of every page it visits.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# firecrawl crawl

Bulk extract content from a website. Crawls pages following links up to a depth/limit.

## When to use

- You need content from many pages on a site (e.g., all `/docs/`)
- You want to extract an entire site section
- Step 4 in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → **crawl** → interact

## Quick start

```bash
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
```

## Options

| Option | Description |
| ------------------------- | ------------------------------------------- |
| `--wait` | Wait for crawl to complete before returning |
| `--progress` | Show progress while waiting |
| `--limit <n>` | Max pages to crawl |
| `--max-depth <n>` | Max link depth to follow |
| `--include-paths <paths>` | Only crawl URLs matching these paths |
| `--exclude-paths <paths>` | Skip URLs matching these paths |
| `--delay <ms>` | Delay between requests |
| `--max-concurrency <n>` | Max parallel crawl workers |
| `--pretty` | Pretty print JSON output |
| `-o, --output <path>` | Output file path |

## Tips

- Always use `--wait` when you need the results immediately. Without it, crawl returns a job ID for async polling.
- Use `--include-paths` to scope the crawl — don't crawl an entire site when you only need one section.
- Crawl consumes credits per page. Check `firecrawl credit-usage` before large crawls.

## See also

- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape individual pages
- [firecrawl-map](../firecrawl-map/SKILL.md) — discover URLs before deciding to crawl
- [firecrawl-download](../firecrawl-download/SKILL.md) — download site to local files (uses map + scrape)
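Once a crawl finishes, the JSON output can be split into per-page files. The sketch below assumes the file follows Firecrawl's usual crawl response shape, with a top-level `data` array whose entries carry a `markdown` field; inspect `.firecrawl/crawl.json` first, since your CLI version may write a different layout:

```bash
# Hedged sketch: write each crawled page's markdown to pages/<index>.md.
# Assumes a top-level "data" array with a "markdown" field per page — verify before running.
count=$(jq '.data | length' .firecrawl/crawl.json)
mkdir -p pages
for i in $(seq 0 $((count - 1))); do
  jq -r ".data[$i].markdown" .firecrawl/crawl.json > "pages/$i.md"
done
```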
Related Skills
firecrawl-scraper
Deep web scraping, screenshots, PDF parsing, and website crawling using Firecrawl API
firecrawl-search
Web search with full page content extraction. Use this skill whenever the user asks to search the web, find articles, research a topic, look something up, find recent news, discover sources, or says "search for", "find me", "look up", "what are people saying about", or "find articles about". Returns real search results with optional full-page markdown — not just snippets. Provides capabilities beyond Claude's built-in WebSearch.
firecrawl-scrape
Extract clean markdown from any URL, including JavaScript-rendered SPAs. Use this skill whenever the user provides a URL and wants its content, says "scrape", "grab", "fetch", "pull", "get the page", "extract from this URL", or "read this webpage". Handles JS-rendered pages, multiple concurrent URLs, and returns LLM-optimized markdown. Use this instead of WebFetch for any webpage content extraction.
firecrawl-map
Discover and list all URLs on a website, with optional search filtering. Use this skill when the user wants to find a specific page on a large site, list all URLs, see the site structure, find where something is on a domain, or says "map the site", "find the URL for", "what pages are on", or "list all pages". Essential when the user knows which site but not which exact page.
firecrawl-download
Download an entire website as local files — markdown, screenshots, or multiple formats per page. Use this skill when the user wants to save a site locally, download documentation for offline use, bulk-save pages as files, or says "download the site", "save as local files", "offline copy", "download all the docs", or "save for reference". Combines site mapping and scraping into organized local directories.
firecrawl-agent
AI-powered autonomous data extraction that navigates complex sites and returns structured JSON. Use this skill when the user wants structured data from websites, needs to extract pricing tiers, product listings, directory entries, or any data as JSON with a schema. Triggers on "extract structured data", "get all the products", "pull pricing info", "extract as JSON", or when the user provides a JSON schema for website data. More powerful than simple scraping for multi-page structured extraction.
enact-firecrawl
Scrape, crawl, search, and extract structured data from websites using Firecrawl API - converts web pages to LLM-ready markdown
firecrawl
A multi-purpose web scraping and data extraction tool supporting synchronous scraping, search, sitemap retrieval, and asynchronous crawling
crawl4ai
A powerful open-source web scraping and data processing tool with six operating modes, including screenshots, PDF export, and intelligent crawling
firecrawl-web
Fetch web content, take screenshots, extract structured data, search the web, and crawl documentation sites. Use when the user needs current web information, asks to scrape a URL, wants a screenshot, needs to extract specific data from a page, or wants to learn about a framework or library.
azure-quotas
Check/manage Azure quotas and usage across providers. For deployment planning, capacity validation, region selection. WHEN: "check quotas", "service limits", "current usage", "request quota increase", "quota exceeded", "validate capacity", "regional availability", "provisioning limits", "vCPU limit", "how many vCPUs available in my subscription".
raindrop-io
Manage Raindrop.io bookmarks with AI assistance. Save and organize bookmarks, search your collection, manage reading lists, and organize research materials. Use when working with bookmarks, web research, reading lists, or when user mentions Raindrop.io.