seo-domain-analyzer
Pull real SEO metrics for any domain using Apify scrapers for Semrush and Ahrefs data. Gets domain authority, organic traffic estimates, keyword rankings, backlink profiles, top performing pages, and auto-discovers competitors from keyword overlap. No Semrush/Ahrefs subscription needed — uses Apify actors that scrape public pages.
Best use case
seo-domain-analyzer is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using seo-domain-analyzer should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/seo-domain-analyzer/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
It pulls real SEO metrics for any domain (domain authority, organic traffic estimates, keyword rankings, backlink profiles, top performing pages, and auto-discovered competitors) using Apify actors that scrape public Semrush and Ahrefs pages, so no Semrush/Ahrefs subscription is needed.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# SEO Domain Analyzer
Pull real SEO performance data for any domain — no Semrush or Ahrefs subscription needed. Uses Apify actors that scrape Semrush/Ahrefs public pages to get authority scores, traffic estimates, keyword rankings, backlink profiles, and competitor discovery.
## Quick Start
```bash
# Basic domain analysis
python3 skills/seo-domain-analyzer/scripts/analyze_domain.py \
--domain "example.com"
# With competitor comparison
python3 skills/seo-domain-analyzer/scripts/analyze_domain.py \
--domain "example.com" \
--competitors "competitor1.com,competitor2.com,competitor3.com"
# Check specific keywords
python3 skills/seo-domain-analyzer/scripts/analyze_domain.py \
--domain "example.com" \
--keywords "cloud cost optimization,reduce aws bill,finops tools"
# Save output
python3 skills/seo-domain-analyzer/scripts/analyze_domain.py \
--domain "example.com" --output clients/acme/research/seo-profile.json
```
## Inputs
| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| domain | Yes | — | Domain to analyze (e.g., "example.com") |
| competitors | No | auto-discovered | Comma-separated competitor domains |
| keywords | No | auto-inferred | Specific keywords to check rankings for |
| output | No | stdout | Path to save JSON output |
| skip-backlinks | No | false | Skip Ahrefs backlink analysis (saves ~$0.10) |
## Cost
| Data Source | Apify Actor | Est. Cost |
|-------------|------------|-----------|
| Domain overview (Semrush) | `devnaz/semrush-scraper` | ~$0.10/domain |
| Backlink profile (Ahrefs) | `radeance/ahrefs-scraper` | ~$0.10/domain |
| Keyword rank checks | `apify/google-search-scraper` | ~$0.002/keyword |
| **Typical full run** | | **~$0.50-1.00** |
| **With 3 competitors** | | **~$1.50-3.00** |
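The per-actor rates above compose into a run estimate. A minimal sketch, using the table's rough rates and assuming competitors get the Semrush overview only (the `estimate_cost` helper is illustrative, not part of the script):

```python
# Rough per-call rates from the cost table above (estimates, not guarantees).
SEMRUSH_COST = 0.10   # per domain overview
AHREFS_COST = 0.10    # per backlink profile
KEYWORD_COST = 0.002  # per keyword rank check

def estimate_cost(num_competitors=0, num_keywords=0, skip_backlinks=False):
    """Return an approximate USD cost for one analysis run."""
    cost = SEMRUSH_COST                      # target domain overview
    if not skip_backlinks:
        cost += AHREFS_COST                  # target domain backlink profile
    cost += num_competitors * SEMRUSH_COST   # competitors: overview only
    cost += num_keywords * KEYWORD_COST
    return round(cost, 2)
```

For example, a run with 3 competitors and 10 keyword checks comes to about $0.52 under these assumed rates; real actor pricing varies.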
## Process
### Phase 1: Domain Overview (Semrush Data)
Use Apify actor `devnaz/semrush-scraper` to get:
```python
# Actor: devnaz/semrush-scraper
# Input: domain URL
{
"urls": ["https://example.com"]
}
```
**Extracted metrics:**
- **Authority Score** (0-100)
- **Organic monthly traffic** estimate
- **Organic keywords count** (how many keywords the domain ranks for)
- **Paid traffic** estimate (if any)
- **Backlinks count** (Semrush's count)
- **Referring domains count**
- **Top organic keywords** (keyword, position, traffic share)
- **Top competitors** (competing domains by keyword overlap)
- **Traffic trend** (month-over-month direction)
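The actor input above can be sent through Apify's synchronous run endpoint, which returns the dataset items directly. A sketch using `requests` (the run-sync-get-dataset-items route is Apify's documented API; the actor's exact output fields may differ from the metrics listed):

```python
import os
import requests

APIFY_BASE = "https://api.apify.com/v2/acts"

def actor_url(actor_id: str) -> str:
    """Build the run-sync URL for an actor ID like 'devnaz/semrush-scraper'.

    Apify's API replaces the slash in actor IDs with a tilde.
    """
    return f"{APIFY_BASE}/{actor_id.replace('/', '~')}/run-sync-get-dataset-items"

def fetch_domain_overview(domain: str) -> list:
    """Run the Semrush scraper actor and return its dataset items."""
    resp = requests.post(
        actor_url("devnaz/semrush-scraper"),
        params={"token": os.environ["APIFY_API_TOKEN"]},
        json={"urls": [f"https://{domain}"]},
        timeout=300,  # actor runs can take a few minutes
    )
    resp.raise_for_status()
    return resp.json()
```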
### Phase 2: Backlink Profile (Ahrefs Data)
Use Apify actor `radeance/ahrefs-scraper` to get:
```python
# Actor: radeance/ahrefs-scraper
# Input: domain for backlink analysis
{
"urls": ["https://example.com"],
"mode": "domain-overview"
}
```
**Extracted metrics:**
- **Domain Rating (DR)** (0-100)
- **URL Rating** of homepage
- **Referring domains** count and trend
- **Backlinks** total count
- **Top referring domains** (which sites link to them)
- **Anchor text distribution** (branded vs keyword vs generic)
- **Dofollow vs nofollow ratio**
### Phase 3: Keyword Rank Verification
For specific keywords (user-provided or auto-inferred from Phase 1), verify actual rankings using Google search:
```python
# Actor: apify/google-search-scraper
# Input: keyword queries
{
"queries": "cloud cost optimization",
"maxPagesPerQuery": 1,
"resultsPerPage": 10,
"countryCode": "us",
"languageCode": "en"
}
```
**For each keyword:**
- Does the target domain appear in top 10?
- What position?
- What specific URL ranks?
- Who else ranks? (competitive landscape for that keyword)
**Keyword sources (in priority order):**
1. User-provided keywords
2. Top organic keywords from Semrush data (Phase 1)
3. Auto-inferred from domain content (WebSearch `site:[domain]` to see page titles)
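The per-keyword checks above reduce to scanning the scraped SERP for the target domain. A sketch, assuming each result is a dict with a `"url"` key (the field name is illustrative and depends on the scraper's output):

```python
from urllib.parse import urlparse

def _bare(host: str) -> str:
    """Normalize a hostname: lowercase and strip a leading 'www.'."""
    host = host.lower()
    return host[4:] if host.startswith("www.") else host

def find_ranking(results, target_domain):
    """Return (position, url) of the first result on target_domain, else None.

    Matches the domain itself and any subdomain of it.
    """
    target = _bare(target_domain)
    for position, item in enumerate(results, start=1):
        host = _bare(urlparse(item["url"]).netloc)
        if host == target or host.endswith("." + target):
            return position, item["url"]
    return None
```

The same loop also yields the competitive landscape: the hosts of the non-matching results are the SERP competitors for that keyword.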
### Phase 4: Top Pages Analysis
From the Semrush data, extract the highest-traffic pages:
- URL
- Estimated monthly traffic
- Primary keyword(s) driving traffic
- Number of ranking keywords
If Semrush doesn't provide per-page data, supplement with:
- WebSearch: `site:[domain]` and note which pages appear first (proxy for importance)
- WebSearch: `site:[domain] blog` for top blog content
### Phase 5: Competitor Discovery
Competitors are identified from multiple sources:
1. **Semrush competitor data** (Phase 1) — domains competing for same keywords
2. **User-provided competitors** — always included
3. **Google SERP competitors** — from Phase 3 keyword checks, note which domains consistently appear
For each competitor, run a lighter version of Phase 1 (domain overview only):
- Authority score
- Organic traffic estimate
- Keyword count
- Top keywords
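Merging the three competitor sources can be sketched as an order-preserving de-duplication, with user-provided domains always kept first (the helper name and `limit` cap are illustrative):

```python
def merge_competitors(user_provided, semrush, serp_domains, limit=5):
    """Combine competitor candidates, de-duplicated, user-provided first."""
    seen, merged = set(), []
    for domain in list(user_provided) + list(semrush) + list(serp_domains):
        d = domain.lower()
        if d not in seen:
            seen.add(d)
            merged.append(d)
    return merged[:limit]
```

Capping the list keeps Phase 5's per-competitor Semrush calls (and cost) bounded.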
### Phase 6: Output
#### JSON Output
```json
{
"domain": "example.com",
"analysis_date": "2026-02-25",
"domain_metrics": {
"semrush_authority_score": 45,
"ahrefs_domain_rating": 52,
"organic_monthly_traffic": 28500,
"organic_keywords": 1240,
"backlinks": 8930,
"referring_domains": 412,
"traffic_trend": "increasing"
},
"top_pages": [
{
"url": "https://example.com/blog/reduce-aws-costs",
"estimated_traffic": 3200,
"top_keyword": "reduce aws costs",
"ranking_keywords": 45
}
],
"keyword_rankings": [
{
"keyword": "cloud cost optimization",
"position": 4,
"url": "https://example.com/blog/cloud-cost-optimization-guide",
"serp_competitors": ["vantage.sh", "antimetal.com", "finout.io"]
}
],
"backlink_profile": {
"domain_rating": 52,
"total_backlinks": 8930,
"referring_domains": 412,
"dofollow_ratio": 0.78,
"top_referring_domains": ["techcrunch.com", "producthunt.com", ...],
"anchor_text_distribution": {
"branded": 0.45,
"keyword": 0.22,
"generic": 0.18,
"url": 0.15
}
},
"competitors": [
{
"domain": "competitor1.com",
"authority_score": 62,
"organic_traffic": 45000,
"organic_keywords": 2100,
"keyword_overlap": 340
}
]
}
```
#### Markdown Summary (also generated)
```markdown
# SEO Domain Profile: example.com
**Date:** 2026-02-25
## Domain Metrics
| Metric | Value |
|--------|-------|
| Semrush Authority Score | 45/100 |
| Ahrefs Domain Rating | 52/100 |
| Monthly Organic Traffic | ~28,500 |
| Organic Keywords | 1,240 |
| Backlinks | 8,930 |
| Referring Domains | 412 |
| Traffic Trend | Increasing |
## Top Performing Pages
| # | URL | Est. Traffic | Top Keyword |
|---|-----|-------------|-------------|
| 1 | /blog/reduce-aws-costs | 3,200 | reduce aws costs |
| ... |
## Keyword Rankings
| Keyword | Position | URL | SERP Competitors |
|---------|----------|-----|-----------------|
| cloud cost optimization | #4 | /blog/cloud-cost... | vantage.sh, antimetal.com |
| ... |
## Backlink Profile
- Domain Rating: 52/100
- Referring Domains: 412
- Dofollow Ratio: 78%
- Top linking sites: TechCrunch, Product Hunt, ...
## Competitor Comparison
| Domain | Authority | Traffic | Keywords | Overlap |
|--------|-----------|---------|----------|---------|
| example.com | 45 | 28.5K | 1,240 | — |
| competitor1.com | 62 | 45K | 2,100 | 340 |
| ... |
```
## Tips
- **Semrush scraper data quality varies.** The Apify actors scrape public Semrush pages, which show limited data for non-subscribers. Traffic estimates and top keywords are available, but detailed per-page breakdowns may be partial.
- **Combine with site-content-catalog** to get both the content inventory AND performance data — together they tell you what content exists AND which pieces actually drive traffic.
- **Keyword rank verification via Google** is the most reliable data point. Semrush/Ahrefs estimates can be off, but checking actual SERPs gives ground truth.
- **Run competitors lighter.** Full backlink analysis on 5 competitors gets expensive. Domain overview (Semrush only) is usually sufficient for comparison.
- **Apify actors may break.** These scrape Semrush/Ahrefs public pages, which can change. If an actor fails, fall back to the free `seo-traffic-analyzer` skill which uses web search probes.
## Fallback: Free Mode
If `APIFY_API_TOKEN` is not set or Apify actors fail, the script falls back to:
1. WebSearch probes (like `seo-traffic-analyzer` skill)
2. `site:[domain]` for indexed page count
3. SimilarWeb free tier for traffic estimates
4. Manual Google SERP checks for keyword rankings
This gives less precise data but still produces a useful report.
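The fallback decision can be sketched as: use Apify when a token is configured, and drop to the free probes when the token is missing or an actor run raises. Here `run_apify_analysis` and `run_free_analysis` are hypothetical callables standing in for the two modes:

```python
import os

def analyze(domain, run_apify_analysis, run_free_analysis):
    """Run Apify mode when possible, otherwise the free fallback."""
    if os.environ.get("APIFY_API_TOKEN"):
        try:
            return run_apify_analysis(domain)
        except Exception:
            pass  # actor broke (scraped pages changed); fall through
    return run_free_analysis(domain)
```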
## Dependencies
- Python 3.8+
- `requests` library
- `APIFY_API_TOKEN` env var (for Apify mode; falls back to free probes without it)

Related Skills
ad-campaign-analyzer
Analyze ad campaign performance data (Google, Meta, LinkedIn) to identify what's working, what's wasting budget, and specific cut/scale/test recommendations. Takes CSV or pasted data, runs statistical analysis, and produces a diagnostic report with action items.
seo-traffic-analyzer
Analyze a website's SEO visibility, keyword rankings, traffic estimates, and competitive positioning. Uses web search probes, SimilarWeb (free tier via web), and site: queries to build an SEO profile without requiring paid tool subscriptions. Useful for competitive intel, gap analysis, and reverse-engineering a company's organic acquisition strategy.
signal-detection-pipeline
Detect buying signals from multiple sources, qualify leads, and generate outreach context
seo-content-engine
Build and run an SEO content engine: audit current state, identify gaps, build keyword architecture, generate content calendar, draft content.
outbound-prospecting-engine
End-to-end outbound prospecting: detect intent signals, research companies, find decision-maker contacts, personalize messaging, launch campaign.
event-prospecting-pipeline
Find attendees at conferences/events, research their companies, qualify against ICP, and launch outreach
competitor-monitoring-system
Set up and run ongoing competitive intelligence monitoring for a client. Tracks competitor content, ads, reviews, social, and product moves.
client-packet-engine
Batch client packet generator. Takes company names/URLs, runs intelligence + strategy generation, presents strategies for human selection, executes selected strategies in pitch-packet mode (no live campaigns or paid enrichment), and packages into local delivery packets.
client-package-notion
Package all work done for a client into a shareable Notion page with subpages and Google Sheets. Reads the client's folder (strategies, campaigns, content, leads, notes) and builds a structured Notion workspace the client can browse. Lead list CSVs are uploaded to Google Sheets and linked from the Notion pages. Use when you want to deliver work to a client in a polished, navigable format.
client-package-local
Package all work done for a client into a local filesystem delivery package with .md files and Google Sheets. Reads the client's folder (strategies, campaigns, content, leads, notes) and builds a structured directory with dated deliverables. Lead lists are uploaded to Google Sheets and linked from the markdown files. Use when you want to deliver work to a client in a polished, navigable format without requiring Notion.
client-onboarding
Full client onboarding: intelligence gathering, synthesis into Client Intelligence Package, and growth strategy generation. Phases 1-3 of the Client Launch Playbook.
lead-discovery
Orchestrator that runs first for lead generation requests. Gathers business context via website analysis or questions, identifies competitors, builds ICP, and routes to signal skills with pre-filled inputs.