google-ad-scraper
Scrape competitor ads from Google's Ads Transparency Center (Search, YouTube, Display, Gmail). Search by company name, domain, or advertiser ID. Returns ad creatives, formats, targeting regions, and campaign details. Use for competitive ad research and messaging analysis.
Best use case
google-ad-scraper is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using google-ad-scraper should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/google-ad-scraper/SKILL.md` inside your project
- Restart your AI agent so it auto-discovers the skill
Frequently Asked Questions
What does this skill do?
It scrapes competitor ads from Google's Ads Transparency Center across Search, YouTube, Display, and Gmail. You can search by company name, domain, or advertiser ID, and it returns ad creatives, formats, targeting regions, and campaign details for competitive ad research and messaging analysis.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Google Ads Transparency Scraper
Scrape ads from Google's Ads Transparency Center using the Apify `xtech/google-ad-transparency-scraper` actor. Covers Search, YouTube, Display, and Gmail ads.
## Quick Start
Requires `APIFY_API_TOKEN` env var (or `--token` flag). Install dependency: `pip install requests`.
```bash
# Search by company name (auto-resolves advertiser ID)
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--company "Nike"
# Search by domain (more precise)
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--domain "nike.com"
# Direct advertiser ID (skip lookup step)
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--advertiser-id "AR13129532367502835713"
# With region filter
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--company "Shopify" --region US
# Limit results
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--domain "hubspot.com" --max-ads 30
# Human-readable summary
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--company "Stripe" --output summary
```
## How It Works
1. **Advertiser Resolution** (if no `--advertiser-id` provided):
- Takes company name or domain
- Searches Google Ads Transparency Center using Apify's web-scraper (Puppeteer)
- Extracts advertiser ID(s) from search results (format: `AR` + 20 digits)
2. **Ad Scraping**:
- Constructs transparency center URL for the advertiser
- Calls the Apify `xtech/google-ad-transparency-scraper` actor
- Polls until complete, fetches dataset
3. **Output**: Returns ads as JSON or human-readable summary
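The run-and-poll step above can be sketched against Apify's v2 REST API. This is a minimal illustration, not the skill's actual script: the `advertiserIds` input field is an assumed input schema for the actor, and `run_actor` is a hypothetical helper name.

```python
import time

API_BASE = "https://api.apify.com/v2"
ACTOR = "xtech~google-ad-transparency-scraper"  # "/" in actor names becomes "~" in API paths


def actor_run_url(actor: str, token: str) -> str:
    """Endpoint that starts a run of the given actor."""
    return f"{API_BASE}/acts/{actor}/runs?token={token}"


def run_actor(token: str, advertiser_id: str, timeout: int = 300) -> list:
    """Start the actor, poll until it reaches a terminal state, then fetch its dataset."""
    import requests  # imported here so the URL helper has no third-party dependency

    resp = requests.post(
        actor_run_url(ACTOR, token),
        json={"advertiserIds": [advertiser_id]},  # hypothetical input field name
        timeout=30,
    )
    resp.raise_for_status()
    run = resp.json()["data"]
    status = run
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(
            f"{API_BASE}/actor-runs/{run['id']}?token={token}", timeout=30
        ).json()["data"]
        if status["status"] in ("SUCCEEDED", "FAILED", "ABORTED", "TIMED-OUT"):
            break
        time.sleep(5)  # poll interval; the real script may back off differently
    items = requests.get(
        f"{API_BASE}/datasets/{status['defaultDatasetId']}/items?token={token}",
        timeout=30,
    )
    items.raise_for_status()
    return items.json()
```

The fixed 5-second poll interval is a simplification; a production script would also surface `FAILED` runs as errors rather than fetching an empty dataset.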
## Advertiser ID Resolution
The script handles the name → ID lookup automatically:
- **By domain** (`--domain nike.com`): Searches `adstransparency.google.com/?domain=nike.com`. Most reliable method.
- **By name** (`--company "Nike"`): Searches `adstransparency.google.com/?text=Nike`. May return multiple matches.
- **Direct ID** (`--advertiser-id AR...`): Skips lookup entirely. Use when you already have the ID.
### Finding the Advertiser ID Manually
If auto-resolution fails:
1. Go to https://adstransparency.google.com
2. Search for the company
3. Click on the advertiser
4. Copy the ID from the URL: `https://adstransparency.google.com/advertiser/AR17828074650563772417`
5. Pass it via `--advertiser-id AR17828074650563772417`
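Both the automatic and manual paths reduce to matching the `AR` + 20-digit pattern noted earlier. A minimal sketch of that extraction (`extract_advertiser_ids` is an illustrative helper, not the script's own function):

```python
import re

# Advertiser IDs are "AR" followed by exactly 20 digits.
ADVERTISER_ID_RE = re.compile(r"AR\d{20}")


def extract_advertiser_ids(text: str) -> list[str]:
    """Return unique advertiser IDs found in a URL or page text, in first-seen order."""
    seen: list[str] = []
    for match in ADVERTISER_ID_RE.findall(text):
        if match not in seen:
            seen.append(match)
    return seen
```

For example, feeding it the advertiser URL from step 4 yields `["AR17828074650563772417"]`.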
## CLI Reference
| Flag | Default | Description |
|------|---------|-------------|
| `--company` | none | Company name to search |
| `--domain` | none | Company domain (e.g. nike.com) — more precise |
| `--advertiser-id` | none | Google Ads advertiser ID(s), comma-separated (skips lookup) |
| `--region` | anywhere | Region filter (US, GB, DE, etc. or "anywhere") |
| `--max-ads` | 50 | Maximum number of ads to return |
| `--output` | json | Output format: `json` or `summary` |
| `--token` | env var | Apify token (prefer `APIFY_API_TOKEN` env var) |
| `--timeout` | 300 | Max seconds to wait for Apify run |
At least one of `--company`, `--domain`, or `--advertiser-id` is required.
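The "at least one selector" rule can be enforced with a small `argparse` check. This is a sketch of the flag surface above, not the actual script's parser:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Google Ads Transparency scraper (sketch)")
    parser.add_argument("--company", help="Company name to search")
    parser.add_argument("--domain", help="Company domain, e.g. nike.com")
    parser.add_argument("--advertiser-id", dest="advertiser_id",
                        help="Comma-separated advertiser ID(s); skips lookup")
    parser.add_argument("--region", default="anywhere")
    parser.add_argument("--max-ads", dest="max_ads", type=int, default=50)
    parser.add_argument("--output", choices=["json", "summary"], default="json")
    parser.add_argument("--timeout", type=int, default=300)
    return parser


def parse_args(argv: list[str]) -> argparse.Namespace:
    parser = build_parser()
    args = parser.parse_args(argv)
    # Enforce: at least one of --company / --domain / --advertiser-id.
    if not (args.company or args.domain or args.advertiser_id):
        parser.error("one of --company, --domain, or --advertiser-id is required")
    return args
```

Calling `parse_args([])` exits with a usage error, while `parse_args(["--domain", "nike.com"])` succeeds with the defaults shown in the table.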
## Output Fields
Each ad in the output may contain (varies by ad format):
```json
{
"advertiser_name": "Nike, Inc.",
"advertiser_id": "AR13129532367502835713",
"ad_format": "TEXT",
"headline": "Nike.com - Official Site",
"description": "Shop the latest Nike shoes, clothing...",
"display_url": "nike.com",
"destination_url": "https://www.nike.com/",
"region": "United States",
"last_shown": "2026-02-20",
"first_shown": "2026-01-15",
"image_url": "https://...",
"video_url": "https://..."
}
```
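Because fields vary by ad format, any consumer of this output should tolerate missing keys. A sketch of what a `--output summary` line might look like (the real script's formatting may differ):

```python
def summarize_ad(ad: dict) -> str:
    """One-line summary per ad, tolerating format-dependent missing fields."""
    fmt = ad.get("ad_format", "UNKNOWN")
    title = ad.get("headline") or ad.get("destination_url") or "(no headline)"
    shown = ad.get("last_shown", "?")
    return f"[{fmt}] {title} (last shown {shown})"
```

For the TEXT ad above this yields `[TEXT] Nike.com - Official Site (last shown 2026-02-20)`; a video ad with no headline falls back to its destination URL.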
## Cost
- Advertiser lookup: ~$0.05 (one web-scraper page)
- Ad scraping: Varies by actor pricing, typically a few cents per advertiser
## Common Workflows
### 1. Competitor Ad Research
```bash
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--domain "competitor.com" --max-ads 100 --output summary
```
### 2. Compare Multiple Competitors
```bash
# Get IDs first, then scrape in one run
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--advertiser-id "AR111,AR222,AR333" --max-ads 50
```
### 3. Regional Ad Targeting Analysis
```bash
# See what ads run in specific regions
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--domain "shopify.com" --region US --output summary
python3 skills/google-ad-scraper/scripts/search_google_ads.py \
--domain "shopify.com" --region GB --output summary
```
## Limitations
- **Advertiser ID lookup** uses Puppeteer-based web scraping of Google's SPA. It may occasionally fail — use `--domain` for best results or provide `--advertiser-id` directly.
- **Ad coverage**: Google only shows ads from verified advertisers. Some smaller advertisers may not appear.
- **Historical data**: The Transparency Center primarily shows recently active ads.
## Configuration
See `references/apify-config.md` for detailed API configuration, token setup, and rate limits.
Related Skills
google-search-ads-builder
End-to-end Google Search Ads campaign builder. Takes ICP + product info, performs keyword research via competitive analysis, builds ad group structure, generates headline/description variants, creates negative keyword lists, recommends bid strategy, and exports a campaign-ready CSV for Google Ads Editor import.
web-archive-scraper
Search the Wayback Machine for archived versions of websites. Extract cached pages, customer lists, testimonials, and partner directories from sites that have changed or gone offline. Uses the free CDX API — no API key needed.
twitter-scraper
Search and scrape Twitter/X posts using Apify. Use when you need to find tweets, track brand mentions, monitor competitors on Twitter, or analyze Twitter discussions. Uses Twitter native search syntax (since:/until:) for reliable date filtering.
review-scraper
Scrape product reviews from G2, Capterra, and Trustpilot using Apify. Single script with platform dispatch. Use when you need to monitor competitor reviews, track product sentiment, or gather customer feedback from review sites.
reddit-scraper
Scrape and search Reddit posts using Apify. Use when you need to find Reddit discussions, track competitor mentions, monitor product feedback, discover pain points, or analyze subreddit content. Supports keyword filtering, time-based searches, and subreddit-specific queries.
product-hunt-scraper
Scrape Product Hunt trending products using Apify. Use when you need to discover new product launches, track competitors on Product Hunt, or monitor the startup ecosystem for relevant launches.
meta-ad-scraper
Scrape competitor ads from Meta's Ad Library (Facebook, Instagram, Messenger, Threads, WhatsApp). Search by company name, Facebook Page URL, or keyword. Returns ad creatives, spend estimates, reach, impressions, and campaign details. Use for competitive ad research, messaging analysis, and creative inspiration.
linkedin-profile-post-scraper
Scrape recent posts from LinkedIn profiles using Apify. Use when you need to monitor what specific people are posting on LinkedIn, track founder/exec activity, or gather LinkedIn content for competitive intelligence.
linkedin-job-scraper
Scrapes LinkedIn job postings using the JobSpy library (python-jobspy). Use this skill whenever the user wants to find jobs on LinkedIn, search for open roles, pull job listings, build a job pipeline, source job targets for GTM research, or monitor hiring signals. Even if the user just says "find me some jobs" or "what roles is [company] hiring for", use this skill. It runs a local Python script that outputs a CSV of job postings with title, company, location, salary, job type, description, and direct URLs.
hacker-news-scraper
Search Hacker News stories and comments using the free Algolia API. No Apify token needed. Use when you need to find HN discussions, track mentions, discover Show HN launches, or monitor tech community sentiment.
conference-speaker-scraper
Extract speaker names, titles, companies, and bios from conference websites. Supports direct HTML scraping and Apify web scraper fallback for JS-heavy sites. Use for pre-event research and outreach targeting.
blog-scraper
Scrape blog posts via RSS feeds (free, no API key) with Apify fallback for JS-heavy sites. Use when you need to monitor competitor blogs, track industry content, or aggregate blog posts by keyword.