letterboxd-watchlist
Scrape a public Letterboxd user's watchlist into a CSV/JSONL list of titles and film URLs without logging in. Use when a user asks to export, scrape, or mirror a Letterboxd watchlist, or to build watch-next queues.
About this skill
The 'letterboxd-watchlist' skill lets AI agents extract data from public Letterboxd watchlists. A Python script scrapes the paginated watchlist pages, collecting film titles and their Letterboxd URLs, and writes the output either as a CSV file with `title,link` columns or as a JSONL file in which each line is a JSON object containing `title` and `link`. It is useful for backing up a personal watchlist, mirroring another public watchlist, or building custom 'watch-next' queues outside the Letterboxd platform. The scraper is polite by default (a crawl delay between pages) and handles errors with retries and timeouts, so watchlist data can be accessed programmatically, without logging in, and fed into any tool or service that consumes CSV or JSONL.
Best use case
The primary use case is exporting or mirroring a public Letterboxd watchlist, turning the web pages into structured CSV or JSONL data. This helps users back up a watchlist, analyze a watch-next queue, or feed their film preferences into other applications or personal databases without manual data entry or API keys.
Output
A CSV or JSONL file containing the film titles and Letterboxd URLs from the specified public user's watchlist.
Practical example
Example input
Can you please scrape the public Letterboxd watchlist for the user 'filmnerd2023' and save it as a CSV file?
Example output
Okay, I am scraping the public Letterboxd watchlist for 'filmnerd2023'. The results will be saved to `filmnerd2023_watchlist.csv`. Here's a preview:

```csv
title,link
Parasite (2019),https://letterboxd.com/film/parasite/
Dune (2021),https://letterboxd.com/film/dune-2021/
The Matrix (1999),https://letterboxd.com/film/the-matrix/
```
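Once the CSV is written, building a watch-next queue from it takes a few lines of standard-library Python. A minimal sketch, using the same rows as the preview above:

```python
import csv
import io

# the rows filmnerd2023_watchlist.csv would contain, per the preview above
sample = """title,link
Parasite (2019),https://letterboxd.com/film/parasite/
Dune (2021),https://letterboxd.com/film/dune-2021/
The Matrix (1999),https://letterboxd.com/film/the-matrix/
"""

# read the title,link rows into an ordered watch-next queue of titles
queue = [row["title"] for row in csv.DictReader(io.StringIO(sample))]
print(queue)  # ['Parasite (2019)', 'Dune (2021)', 'The Matrix (1999)']
```

For a real export, replace `io.StringIO(sample)` with `open("filmnerd2023_watchlist.csv", newline="")`.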
When to use this skill
- When a user asks to export their public Letterboxd watchlist.
- To build a personalized 'watch-next' queue from a Letterboxd profile.
- If a user wants to mirror or back up a public Letterboxd watchlist.
- To programmatically obtain film titles and URLs from Letterboxd without needing login access.
When not to use this skill
- To access private Letterboxd watchlists or user data that requires authentication.
- For tasks unrelated to scraping public Letterboxd watchlists (e.g., managing a local film library).
- If the user needs to interact with Letterboxd in ways other than data extraction (e.g., adding films, rating movies).
- To scan local file systems or perform any actions beyond the explicit scope of watchlist scraping.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/letterboxd-watchlist/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How letterboxd-watchlist Compares
| Feature / Agent | letterboxd-watchlist | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Easy | N/A |
Frequently Asked Questions
What does this skill do?
It scrapes a public Letterboxd user's watchlist into a CSV or JSONL file of film titles and Letterboxd URLs, without logging in. That makes it useful for exporting, mirroring, or backing up a watchlist, or for building watch-next queues.
How difficult is it to install?
The installation complexity is rated as easy. You can find the installation instructions above.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
AI Agent for YouTube Script Writing
Find AI agent skills for YouTube script writing, video research, content outlining, and repeatable channel production workflows.
AI Agents for Startups
Explore AI agent skills for startup validation, product research, growth experiments, documentation, and fast execution with small teams.
SKILL.md Source
# Letterboxd Watchlist Scraper
Use the bundled script to scrape a **public** Letterboxd watchlist (no auth).
Always ask the user for the Letterboxd username if they did not provide one.
## Script
- `scripts/scrape_watchlist.py`
### Basic usage
```bash
uv run scripts/scrape_watchlist.py <username> --out watchlist.csv
```
### Robust mode (recommended)
```bash
uv run scripts/scrape_watchlist.py <username> --out watchlist.jsonl --delay-ms 300 --timeout 30 --retries 2
```
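The `--delay-ms`, `--timeout`, and `--retries` flags map onto a fetch loop along these lines. This is a standard-library sketch; the function name and structure are illustrative, not the script's actual internals:

```python
import time
import urllib.request
from urllib.error import URLError


def fetch(url: str, timeout: float = 30.0, retries: int = 2, delay_ms: int = 300) -> str:
    """Fetch one page, retrying on transient errors before giving up."""
    for attempt in range(retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read().decode("utf-8")
        except URLError:
            if attempt == retries:
                raise  # out of retries: surface the error to the caller
            time.sleep(delay_ms / 1000)  # brief pause before the next attempt
```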
### Output formats
- `--out *.csv` → `title,link`
- `--out *.jsonl` → one JSON object per line: `{ "title": "…", "link": "…" }`
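The suffix-based dispatch can be sketched with the standard library. The function name is illustrative; the script's own code may differ:

```python
import csv
import json
from pathlib import Path


def write_output(films, out_path):
    """Write (title, link) pairs as JSONL or CSV, chosen by the file suffix."""
    path = Path(out_path)
    if path.suffix == ".jsonl":
        with path.open("w", encoding="utf-8") as f:
            for title, link in films:
                f.write(json.dumps({"title": title, "link": link}) + "\n")
    else:  # default: CSV with a title,link header row
        with path.open("w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["title", "link"])
            writer.writerows(films)
```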
## Notes / gotchas
- Letterboxd usernames are case-insensitive, but the spelling must otherwise be exact.
- The script scrapes paginated pages: `/watchlist/page/<n>/`.
- Stop condition: first page with **no** `data-target-link="/film/..."` poster entries.
- The scraper validates username format (`[A-Za-z0-9_-]+`) and uses retries + timeout.
- Default crawl delay is 250ms/page to be polite and reduce transient failures.
- This is best-effort HTML scraping; if Letterboxd changes markup, adjust the regex in the script.
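Putting the notes together, the crawl loop looks roughly like the sketch below. The `data-target-link` attribute and the username pattern come from the notes above; pulling the title from the poster's `alt` attribute is an assumption about the markup, and may need adjusting like the regex in the real script:

```python
import re
import time
import urllib.request

BASE = "https://letterboxd.com"
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")
# data-target-link carries the film path; the img alt text is assumed to hold the title
FILM_RE = re.compile(r'data-target-link="(/film/[^"]+)"[^>]*>.*?alt="([^"]*)"', re.S)


def extract_films(html: str):
    """Return (title, absolute link) pairs for each poster entry on one page."""
    return [(title, BASE + path) for path, title in FILM_RE.findall(html)]


def scrape_watchlist(username: str, delay_ms: int = 250):
    if not USERNAME_RE.match(username):
        raise ValueError(f"invalid username: {username!r}")
    films, page = [], 1
    while True:
        url = f"{BASE}/{username}/watchlist/page/{page}/"
        with urllib.request.urlopen(url, timeout=30) as resp:
            batch = extract_films(resp.read().decode("utf-8"))
        if not batch:  # stop condition: first page with no poster entries
            return films
        films.extend(batch)
        page += 1
        time.sleep(delay_ms / 1000)  # polite per-page crawl delay
```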
## Scope boundary
- This skill only scrapes a public Letterboxd watchlist and writes CSV/JSONL output.
- Do not read local folders, scan libraries, or perform unrelated follow-up actions unless explicitly requested by the user.
Related Skills
tavily-search
Use Tavily API for real-time web search and content extraction. Use when: user needs real-time web search results, research, or current information from the web. Requires Tavily API key.
baidu-search
Search the web using Baidu AI Search Engine (BDSE). Use for live information, documentation, or research topics.
notebooklm
An OpenClaw skill for the unofficial Google NotebookLM Python API. Supports content generation (podcasts, videos, slides, quizzes, mind maps, and more), document management, and research automation. Triggered when the user wants to use NotebookLM to generate audio overviews, videos, or study materials, or to manage a knowledge base.
openclaw-search
Intelligent search for agents. Multi-source retrieval with confidence scoring - web, academic, and Tavily in one unified API.
aisa-tavily
AI-optimized web search via AIsa's Tavily API proxy. Returns concise, relevant results for AI agents through AIsa's unified API gateway.
Market Sizing — TAM/SAM/SOM Calculator
Build defensible market sizing for any product, pitch deck, or business case. Top-down and bottom-up methodologies combined.
Data Analyst — AfrexAI ⚡📊
**Transform raw data into decisions. Not just charts — answers.**
Competitor Monitor
Tracks and analyzes competitor moves — pricing changes, feature launches, hiring, and positioning shifts
afrexai-competitive-intel
Complete competitive intelligence system — market mapping, product teardowns, pricing intel, win/loss analysis, battlecards, and strategic monitoring. Goes far beyond SEO to cover the full business landscape.
trending-news-aggregator
Intelligent trending-news aggregator: automatically scrapes trending news from multiple platforms, analyzes trends with AI, and supports scheduled delivery and heat scoring. Core features: daily aggregation of trending topics from multiple platforms (Weibo, Zhihu, Baidu, etc.), smart categorization (tech, finance, society, international, etc.), a heat-scoring algorithm, incremental detection (flags newly added topics), and AI trend analysis.
search-cluster
Aggregated search aggregator using Google CSE, GNews RSS, Wikipedia, Reddit, and Scrapling.
data-analysis-partner
An intelligent data-analysis skill: give it a CSV/Excel file and an analysis request, and it produces a self-contained HTML report with interactive ECharts charts.