blog-feed-monitor
Scrape blog posts via RSS feeds (free, no API key) with Apify fallback for JS-heavy sites. Use when you need to monitor competitor blogs, track industry content, or aggregate blog posts by keyword.
Best use case
blog-feed-monitor is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using blog-feed-monitor can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/blog-feed-monitor/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How blog-feed-monitor Compares
| Feature / Agent | blog-feed-monitor | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Scrape blog posts via RSS feeds (free, no API key) with Apify fallback for JS-heavy sites. Use when you need to monitor competitor blogs, track industry content, or aggregate blog posts by keyword.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Blog Feed Monitor

Scrape blog posts via RSS/Atom feeds (free) with optional Apify fallback for JS-heavy sites.

## Quick Start

No API key needed for RSS mode.

```bash
# Scrape a blog's RSS feed
python3 skills/blog-feed-monitor/scripts/scrape_blogs.py \
  --urls "https://example.com/blog" --days 30

# Multiple blogs with keyword filter
python3 skills/blog-feed-monitor/scripts/scrape_blogs.py \
  --urls "https://blog1.com,https://blog2.com" --keywords "AI,marketing" --output summary

# Force Apify for JS-heavy sites
python3 skills/blog-feed-monitor/scripts/scrape_blogs.py \
  --urls "https://example.com" --mode apify
```

## How It Works

### Auto Mode (default)

1. For each URL, tries to discover an RSS/Atom feed:
   - Checks HTML `<link rel="alternate">` tags
   - Probes common paths: `/feed`, `/rss`, `/atom.xml`, `/feed.xml`, `/rss.xml`, `/blog/feed`, `/index.xml`
2. Parses discovered feeds (supports RSS 2.0 and Atom)
3. If any URLs fail, falls back to the Apify actor `jupri/rss-xml-scraper` (if a token is available)
4. Applies date and keyword filtering client-side

> **Note:** The Apify fallback actor `jupri/rss-xml-scraper` may need updating; it has not been verified recently. RSS mode works reliably without it.

### RSS Mode

Tries RSS feeds only, with no Apify fallback.

### Apify Mode

Uses the Apify actor directly, skipping RSS discovery.
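The discovery step above can be sketched as follows. This is an illustrative approximation, not the skill's actual implementation: it extracts feed URLs declared in `<link rel="alternate">` tags and appends the common probe paths, leaving the network fetch to the caller.

```python
# Sketch of RSS/Atom feed autodiscovery: declared <link rel="alternate">
# feeds first, then common-path guesses. Illustrative only.
from html.parser import HTMLParser
from urllib.parse import urljoin

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}
COMMON_PATHS = ["/feed", "/rss", "/atom.xml", "/feed.xml",
                "/rss.xml", "/blog/feed", "/index.xml"]

class FeedLinkParser(HTMLParser):
    """Collects href values from <link rel="alternate"> feed tags."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and (a.get("rel") or "").lower() == "alternate"
                and (a.get("type") or "").lower() in FEED_TYPES
                and a.get("href")):
            self.feeds.append(a["href"])

def candidate_feeds(base_url, html):
    """Return declared feed URLs first, then common-path guesses."""
    parser = FeedLinkParser()
    parser.feed(html)
    declared = [urljoin(base_url, href) for href in parser.feeds]
    guesses = [urljoin(base_url, p) for p in COMMON_PATHS]
    seen, out = set(), []       # preserve order, drop duplicates
    for url in declared + guesses:
        if url not in seen:
            seen.add(url)
            out.append(url)
    return out

html_doc = """<html><head>
<link rel="alternate" type="application/rss+xml" href="/blog/rss.xml">
</head><body></body></html>"""
print(candidate_feeds("https://example.com", html_doc)[0])
# → https://example.com/blog/rss.xml
```

A caller would request each candidate in order and stop at the first response that parses as RSS 2.0 or Atom.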
## CLI Reference

| Flag | Default | Description |
|------|---------|-------------|
| `--urls` | *required* | Blog URL(s), comma-separated |
| `--keywords` | none | Keywords to filter (comma-separated, OR logic) |
| `--days` | 30 | Only include posts from the last N days |
| `--max-posts` | 50 | Max posts to return |
| `--mode` | auto | `auto` (RSS + fallback), `rss` (RSS only), `apify` (Apify only) |
| `--output` | json | Output format: `json` or `summary` |
| `--token` | env var | Apify token (only needed for Apify mode/fallback) |
| `--timeout` | 300 | Max seconds for an Apify run |

## Cost

- **RSS mode:** free (no API, no tokens)
- **Apify mode:** uses `jupri/rss-xml-scraper`; minimal Apify credits
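The client-side filtering behind `--days`, `--keywords`, and `--max-posts` can be sketched like this. It is a hedged approximation, not the script's actual code, and the post field names (`title`, `summary`, `published`) are assumptions for illustration.

```python
# Sketch of client-side post filtering: date cutoff, keyword OR logic,
# and a max-posts cap. Field names are illustrative assumptions.
from datetime import datetime, timedelta, timezone

def filter_posts(posts, keywords=None, days=30, max_posts=50):
    """Keep posts newer than the cutoff whose title or summary
    contains any keyword (OR logic), up to max_posts."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    kws = [k.strip().lower() for k in (keywords or [])]
    kept = []
    for post in posts:
        if post["published"] < cutoff:
            continue
        text = (post["title"] + " " + post["summary"]).lower()
        if kws and not any(k in text for k in kws):
            continue  # OR logic: any one keyword is enough
        kept.append(post)
    return kept[:max_posts]

posts = [
    {"title": "AI roundup", "summary": "weekly news",
     "published": datetime.now(timezone.utc) - timedelta(days=2)},
    {"title": "Old post", "summary": "AI archive",
     "published": datetime.now(timezone.utc) - timedelta(days=90)},
]
print([p["title"] for p in filter_posts(posts, keywords=["AI", "marketing"])])
# → ['AI roundup']
```

Matching is case-insensitive substring search here; the real script may match differently (e.g. whole words or title-only).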
Related Skills
competitor-monitoring-system
Set up and run ongoing competitive intelligence monitoring for a client. Tracks competitor content, ads, reviews, social, and product moves.
blog-scraper
Scrape blog posts via RSS feeds (free, no API key) with Apify fallback for JS-heavy sites. Use when you need to monitor competitor blogs, track industry content, or aggregate blog posts by keyword.
newsletter-monitor
Scan an AgentMail inbox for newsletter signals using configurable keyword campaigns. Extracts matched keywords, context snippets, and company mentions from incoming emails. Use for monitoring accounting industry newsletters for buying signals like acquisitions, migrations, and staffing news.
kol-content-monitor
Track what key opinion leaders (KOLs) in your space are posting on LinkedIn and Twitter/X. Surfaces trending narratives, high-engagement topics, and early signals of emerging conversations before they peak. Chains linkedin-profile-post-scraper and twitter-mention-tracker. Use when a marketing team wants to ride trends rather than create them from scratch, or when a founder wants to know which topics are resonating with their audience.
funding-signal-monitor
Monitor web sources for Series A-C funding announcements. Aggregates signals from TechCrunch, Crunchbase (via web search), Twitter, Hacker News, and LinkedIn. Filters by stage, amount, and industry. Returns qualified recently-funded companies ready for outreach.
orthogonal-uptime-monitor
Monitor website uptime: check availability, response times, and status.
client-packet-engine
Batch client packet generator. Takes company names/URLs, runs intelligence + strategy generation, presents strategies for human selection, executes selected strategies in pitch-packet mode (no live campaigns or paid enrichment), and packages into local delivery packets.
client-package-notion
Package all work done for a client into a shareable Notion page with subpages and Google Sheets. Reads the client's folder (strategies, campaigns, content, leads, notes) and builds a structured Notion workspace the client can browse. Lead list CSVs are uploaded to Google Sheets and linked from the Notion pages. Use when you want to deliver work to a client in a polished, navigable format.
client-package-local
Package all work done for a client into a local filesystem delivery package with .md files and Google Sheets. Reads the client's folder (strategies, campaigns, content, leads, notes) and builds a structured directory with dated deliverables. Lead lists are uploaded to Google Sheets and linked from the markdown files. Use when you want to deliver work to a client in a polished, navigable format without requiring Notion.
client-onboarding
Full client onboarding: intelligence gathering, synthesis into Client Intelligence Package, and growth strategy generation. Phases 1-3 of the Client Launch Playbook.
lead-discovery
Orchestrator that runs first for lead generation requests. Gathers business context via website analysis or questions, identifies competitors, builds ICP, and routes to signal skills with pre-filled inputs.
serp-feature-sniper
Analyze SERP features per keyword (featured snippets, PAA, video carousels, knowledge panels, image packs) and produce optimized content structures to win them. Identifies which features are winnable, who currently holds them, and exactly how to format your content to steal them.