firecrawl-download
Download an entire website as local files — markdown, screenshots, or multiple formats per page. Use this skill when the user wants to save a site locally, download documentation for offline use, bulk-save pages as files, or says "download the site", "save as local files", "offline copy", "download all the docs", or "save for reference". Combines site mapping and scraping into organized local directories.
Best use case
firecrawl-download is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using firecrawl-download should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/firecrawl-download/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
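The manual steps above can be sketched as a short shell snippet. The raw GitHub URL below is a placeholder, since the page only says the link is "at the top of the page"; substitute the actual raw link to the skill's SKILL.md:

```shell
# Create the skill directory inside your project
mkdir -p .claude/skills/firecrawl-download

# Fetch SKILL.md — the URL is a placeholder; replace ORG/REPO with the
# skill's actual GitHub repository before running
curl -fsSL "https://raw.githubusercontent.com/ORG/REPO/main/firecrawl-download/SKILL.md" \
  -o .claude/skills/firecrawl-download/SKILL.md \
  || echo "Download failed: replace the placeholder URL first"
```

After the file is in place, restart your agent so it re-scans `.claude/skills/` and picks up the new skill.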
Frequently Asked Questions
What does this skill do?
It downloads an entire website as local files by combining Firecrawl's site mapping and scraping: the site is mapped first to discover its pages, then each page is scraped into organized local directories as markdown, screenshots, or multiple formats per page. It is useful for saving documentation offline, bulk-saving pages, or keeping a local reference copy of a site.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# firecrawl download

> **Experimental.** Convenience command that combines `map` + `scrape` to save an entire site as local files. Maps the site first to discover pages, then scrapes each one into nested directories under `.firecrawl/`. All scrape options work with download. Always pass `-y` to skip the confirmation prompt.

## When to use

- You want to save an entire site (or section) to local files
- You need offline access to documentation or content
- Bulk content extraction with organized file structure

## Quick start

```bash
# Interactive wizard (picks format, screenshots, paths for you)
firecrawl download https://docs.example.com

# With screenshots
firecrawl download https://docs.example.com --screenshot --limit 20 -y

# Multiple formats (each saved as its own file per page)
firecrawl download https://docs.example.com --format markdown,links --screenshot --limit 20 -y
# Creates per page: index.md + links.txt + screenshot.png

# Filter to specific sections
firecrawl download https://docs.example.com --include-paths "/features,/sdks"

# Skip translations
firecrawl download https://docs.example.com --exclude-paths "/zh,/ja,/fr,/es,/pt-BR"

# Full combo
firecrawl download https://docs.example.com \
  --include-paths "/features,/sdks" \
  --exclude-paths "/zh,/ja" \
  --only-main-content \
  --screenshot \
  -y
```

## Download options

| Option | Description |
| ------------------------- | -------------------------------------------------------- |
| `--limit <n>` | Max pages to download |
| `--search <query>` | Filter URLs by search query |
| `--include-paths <paths>` | Only download matching paths |
| `--exclude-paths <paths>` | Skip matching paths |
| `--allow-subdomains` | Include subdomain pages |
| `-y` | Skip confirmation prompt (always use in automated flows) |

## Scrape options (all work with download)

`-f <formats>`, `-H`, `-S`, `--screenshot`, `--full-page-screenshot`, `--only-main-content`, `--include-tags`, `--exclude-tags`, `--wait-for`, `--max-age`, `--country`, `--languages`

## See also

- [firecrawl-map](../firecrawl-map/SKILL.md) — just discover URLs without downloading
- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape individual pages
- [firecrawl-crawl](../firecrawl-crawl/SKILL.md) — bulk extract as JSON (not local files)
Related Skills
media-downloader
Smart media downloader. Automatically searches for and downloads images and video clips based on a user description, with automatic video trimming support. Triggers: "下载图片", "找视频", "download media", "download images", "find video", "/media"
firecrawl-scraper
Deep web scraping, screenshots, PDF parsing, and website crawling using Firecrawl API
firecrawl-search
Web search with full page content extraction. Use this skill whenever the user asks to search the web, find articles, research a topic, look something up, find recent news, discover sources, or says "search for", "find me", "look up", "what are people saying about", or "find articles about". Returns real search results with optional full-page markdown — not just snippets. Provides capabilities beyond Claude's built-in WebSearch.
firecrawl-scrape
Extract clean markdown from any URL, including JavaScript-rendered SPAs. Use this skill whenever the user provides a URL and wants its content, says "scrape", "grab", "fetch", "pull", "get the page", "extract from this URL", or "read this webpage". Handles JS-rendered pages, multiple concurrent URLs, and returns LLM-optimized markdown. Use this instead of WebFetch for any webpage content extraction.
firecrawl-map
Discover and list all URLs on a website, with optional search filtering. Use this skill when the user wants to find a specific page on a large site, list all URLs, see the site structure, find where something is on a domain, or says "map the site", "find the URL for", "what pages are on", or "list all pages". Essential when the user knows which site but not which exact page.
firecrawl-crawl
Bulk extract content from an entire website or site section. Use this skill when the user wants to crawl a site, extract all pages from a docs section, bulk-scrape multiple pages following links, or says "crawl", "get all the pages", "extract everything under /docs", "bulk extract", or needs content from many pages on the same site. Handles depth limits, path filtering, and concurrent extraction.
firecrawl-agent
AI-powered autonomous data extraction that navigates complex sites and returns structured JSON. Use this skill when the user wants structured data from websites, needs to extract pricing tiers, product listings, directory entries, or any data as JSON with a schema. Triggers on "extract structured data", "get all the products", "pull pricing info", "extract as JSON", or when the user provides a JSON schema for website data. More powerful than simple scraping for multi-page structured extraction.
enact-firecrawl
Scrape, crawl, search, and extract structured data from websites using Firecrawl API - converts web pages to LLM-ready markdown
youtube-downloader
Download YouTube videos with customizable quality and format options. Use this skill when the user asks to download, save, or grab YouTube videos. Supports various quality settings (best, 1080p, 720p, 480p, 360p), multiple formats (mp4, webm, mkv), and audio-only downloads as MP3.
firecrawl
Multi-purpose web scraping and data extraction tool, supporting synchronous scraping, search, sitemap retrieval, and asynchronous crawling
firecrawl-web
Fetch web content, take screenshots, extract structured data, search the web, and crawl documentation sites. Use when the user needs current web information, asks to scrape a URL, wants a screenshot, needs to extract specific data from a page, or wants to learn about a framework or library.
video-downloader
Downloads videos from YouTube and other platforms for offline viewing, editing, or archival. Handles various formats and quality options.