signal-scanner
Detect buying signals across TAM companies and watchlist personas. Three-phase architecture: (1) free diff-based signals from existing data (headcount growth, tech stack changes, funding rounds), (2) Apify-powered signals (job postings, LinkedIn content analysis, profile changes), and (3) post-processing with dedup, scoring, and lead status updates. Writes signals to Supabase signals table for downstream activation.
Best use case
signal-scanner is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using signal-scanner can expect more consistent output, faster repeat execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/signal-scanner/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
It detects buying signals across TAM companies and watchlist personas in three phases: free diff-based signals from existing data, Apify-powered signals (job postings, LinkedIn activity), and post-processing with dedup, scoring, and lead status updates. Detected signals are written to the Supabase `signals` table for downstream activation.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Signal Scanner
Scheduled scanner that detects buying signals on TAM companies and watchlist personas, writes them to the `signals` table, and sets up downstream activation.
## When to Use
- After TAM Builder has populated companies and personas
- As a recurring scan (daily/weekly) to detect timing-based outreach triggers
- When you need to move from static lists to intent-driven outreach
## Prerequisites
- `SUPABASE_URL` + `SUPABASE_SERVICE_ROLE_KEY` in `.env`
- `APIFY_TOKEN` in `.env` (for Phase 2 signals)
- `ANTHROPIC_API_KEY` in `.env` (optional, for LLM content analysis)
- TAM companies populated via `tam-builder`
- Watchlist personas created for Tier 1-2 companies
## Signal Types
| Priority | Signal | Level | Source | Cost |
|----------|--------|-------|--------|------|
| P0 | Headcount growth (>10% in 90d) | Company | Data diffs | Free |
| P0 | Tech stack changes | Company | Data diffs | Free |
| P0 | Funding round | Company | Data diffs | Free |
| P0 | Job posting for relevant roles | Company | Apify linkedin-job-search | ~$0.001/job |
| P1 | Leadership job change | Person | Apify linkedin-profile-scraper | ~$3/1k |
| P1 | LinkedIn content analysis | Person | Apify linkedin-profile-posts + LLM | ~$2/1k + LLM |
| P1 | LinkedIn profile updates | Person | Apify linkedin-profile-scraper | ~$3/1k |
| P2 | New C-suite hire | Company | Derived from person scans | Free |
## Config Format
See `configs/example.json` for full schema. Key sections:
- `client_name` — which client's TAM to scan
- `signals.*` — enable/disable each signal type with thresholds
- `scan_scope` — filter by tier, status, lead_status
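A config following the sections above might look roughly like this. Field names below the three top-level keys are illustrative guesses; `configs/example.json` is the authoritative schema:

```json
{
  "client_name": "acme",
  "signals": {
    "headcount_growth": { "enabled": true, "threshold_pct": 10 },
    "job_postings": { "enabled": false }
  },
  "scan_scope": { "tier": [1, 2], "lead_status": ["monitoring"] }
}
```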
## Database Write Policy
**CRITICAL: Never write signals or update lead statuses without explicit user approval.**
The signal scanner writes to multiple tables: `signals` (insert), `enrichment_log` (insert), `companies` (patch snapshots), and `people` (patch lead_status). These writes affect downstream outreach decisions — bad signals lead to bad outreach timing.
**Required flow:**
1. **Always run `--dry-run` first** to detect signals without writing to the database
2. Present the dry-run results to the user: signal count, types, top signals, affected companies/people
3. **Get explicit user approval** before running without `--dry-run`
4. Only then run the actual scan that writes to the database
**Why this matters:**
- Signals drive outreach timing — incorrect signals trigger premature outreach
- `lead_status` changes from `monitoring` to `signal_detected` are hard to undo across many records
- Snapshot updates affect future signal diffs — bad snapshots cascade into future scans
- Enrichment log entries track Apify credit spend
**The agent must NEVER pass `--yes` on a first run.** The `--yes` flag is only for pre-approved scheduled scans where the user has already validated the signal detection logic.
## Usage
```bash
# Dry run first (ALWAYS DO THIS) — detect signals without writing to DB
python skills/capabilities/signal-scanner/scripts/signal_scanner.py \
--config skills/capabilities/signal-scanner/configs/my-client.json --dry-run
# Full scan (only after user reviews dry-run results and approves)
python skills/capabilities/signal-scanner/scripts/signal_scanner.py \
--config skills/capabilities/signal-scanner/configs/my-client.json
# Test mode (5 companies max)
python skills/capabilities/signal-scanner/scripts/signal_scanner.py \
--config configs/example.json --test --dry-run
# Free signals only (skip Apify)
# Set all Apify signals to enabled: false in config
```
### Flags
| Flag | Effect |
|------|--------|
| `--config PATH` | Path to config JSON (required) |
| `--test` | Limit to 5 companies, 3 people |
| `--yes` | Auto-confirm Apify cost prompts. **Only use for pre-approved scheduled scans.** |
| `--dry-run` | Detect signals but don't write to DB. **Always run this first.** |
| `--max-runs N` | Override Apify run limit (default 50) |
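For the pre-approved scheduled case that `--yes` is intended for, the scan could be driven by cron. The repository path and schedule below are illustrative, and the detection logic should be validated with `--dry-run` before anything is scheduled:

```
# Run the pre-approved weekly scan every Monday at 07:00
0 7 * * 1 cd /path/to/repo && python skills/capabilities/signal-scanner/scripts/signal_scanner.py --config skills/capabilities/signal-scanner/configs/my-client.json --yes
```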
## Output
### Signals table writes
Each signal includes: `client_name`, `company_id`, `person_id`, `signal_level` (company or person), `signal_type`, `signal_source`, `strength`, `signal_data` (JSON), `activation_score`, `detected_at`, `acted_on`, `run_id`.
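A single row with those columns might look like this (all values are made up; only the column names come from the list above):

```python
# Illustrative signal row using the columns listed above. Values are
# invented; the real writer lives in scripts/signal_scanner.py.
signal_row = {
    "client_name": "acme",
    "company_id": "c_123",
    "person_id": None,            # null for company-level signals
    "signal_level": "company",
    "signal_type": "headcount_growth",
    "signal_source": "data_diff",
    "strength": 0.8,
    "signal_data": {"previous": 100, "current": 120, "growth_pct": 20.0},
    "activation_score": 1.04,     # strength * recency * account fit
    "detected_at": "2025-01-01T00:00:00Z",
    "acted_on": False,
    "run_id": "run_2025_01_01",
}
```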
### Other database writes
- Person `lead_status` updated to `signal_detected` when activation_score >= threshold
- Company `metadata._signal_snapshot` updated for next diff cycle
- Person `raw_data._signal_snapshot` updated for next diff cycle
- `enrichment_log` entries with `tool='apify'`, `action='search'` or `'enrich'`, plus `credits_used`
### Console output
- Summary stats printed to stdout
## Activation Score
```
activation_score = strength * recency_multiplier * account_fit
Recency: <24h = 1.5, 1-3d = 1.2, 3-7d = 1.0, 1-2w = 0.8, 2-4w = 0.5
Account: Tier 1 = 1.3, Tier 2 = 1.0, Tier 3 = 0.7
```
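The formula can be checked with a small worked example. The multiplier tables below mirror the recency and account-fit values from this section; a strength-0.8 signal detected 2 days ago on a Tier 1 account scores 0.8 × 1.2 × 1.3:

```python
# Worked example of the activation-score formula above.
RECENCY = {"<24h": 1.5, "1-3d": 1.2, "3-7d": 1.0, "1-2w": 0.8, "2-4w": 0.5}
ACCOUNT_FIT = {1: 1.3, 2: 1.0, 3: 0.7}

def activation_score(strength: float, recency_bucket: str, tier: int) -> float:
    return strength * RECENCY[recency_bucket] * ACCOUNT_FIT[tier]

score = activation_score(0.8, "1-3d", 1)  # 0.8 * 1.2 * 1.3 = 1.248
```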
## Connects To
- **Upstream:** `tam-builder` (provides companies + people)
- **Downstream:** `cold-email-outreach` (acts on signals)
## File Structure
```
signal-scanner/
├── SKILL.md
├── configs/
│   └── example.json
└── scripts/
    └── signal_scanner.py
```
Related Skills
signal-detection-pipeline
Detect buying signals from multiple sources, qualify leads, and generate outreach context
github-repo-signals
Extract and score leads from GitHub repositories by analyzing stars, forks, issues, PRs, comments, and contributions. Produces unified multi-repo CSV with deduplicated user profiles. No paid API credits required.
event-signals
Extract leads from conferences, meetups, hackathons, and podcasts by analyzing speaker lists, sponsor lists, hackathon entries, and podcast guests. Discovers events via Sessionize, Confs.tech, Meetup, Luma, ListenNotes, and Devpost. Looks back 90 days and forward 180 days.
competitor-signals
Extract leads from competitor product activity — Product Hunt commenters/upvoters, HN posts about competitors, case studies, testimonials, tech press, and switching signals. Detects people actively switching from competitors as highest-priority leads.
community-signals
Extract leads from developer forums (Hacker News, Reddit) by detecting intent signals — alternative seeking, competitor pain, scaling challenges, DIY solutions, and migration intent. Scores users by intent strength and cross-platform presence.
newsletter-signal-scanner
Subscribe to and scan industry newsletters for buying signals, competitor mentions, ICP pain-point language, and market shifts. Parses incoming newsletter emails via AgentMail, matches against keyword campaigns, and delivers a weekly digest of actionable signals. Use when a marketing team wants to turn newsletter subscriptions into an ongoing intelligence feed without manual reading.
news-signal-outreach
End-to-end news-triggered signal composite. Takes any piece of news — an article, LinkedIn post, tweet, announcement, event, trend, regulation, product launch, acquisition, layoff, expansion, or any other public event — and evaluates whether the companies or people mentioned are ICP fits. If yes, identifies the connection between the news and your product, finds the right people to contact, and drafts personalized outreach using the news as the hook. Tool-agnostic. Accepts both company-level and person-level news triggers. AUTO-TRIGGER: Load this composite whenever a user shares a URL (LinkedIn post, article, tweet, blog post) or mentions a company/person they "came across", "saw", or "found" from any external source and asks about relevance, fit, ICP match, or whether to reach out. The user does NOT need to explicitly say "outreach" — any signal evaluation request from an external source triggers this.
industry-scanner
Daily industry intelligence scanner. Scans web, social media, news, blogs, and communities for industry-relevant events, trends, and signals. Produces a comprehensive intelligence briefing plus strategic GTM opportunity ideas. Orchestrates existing scraping skills — does not reimplement data collection.
hiring-signal-outreach
End-to-end hiring signal composite. Takes any set of companies, detects job postings that your product augments or replaces, finds relevant people (the hiring manager, buyers, champions, users), and drafts personalized outreach using the job role as the hook. Tool-agnostic — works with any company source, job board, contact finder, and outreach platform.
funding-signal-outreach
End-to-end funding signal composite. Takes any set of companies, detects recent funding events, qualifies against your company context, finds relevant people (buyers, champions, users), and drafts personalized outreach. Tool-agnostic — works with any company source, contact finder, and outreach platform.
funding-signal-monitor
Monitor web sources for Series A-C funding announcements. Aggregates signals from TechCrunch, Crunchbase (via web search), Twitter, Hacker News, and LinkedIn. Filters by stage, amount, and industry. Returns qualified recently-funded companies ready for outreach.
expansion-signal-spotter
Monitor existing customer accounts for upsell and cross-sell signals: team growth on LinkedIn, new job postings, product usage patterns, funding announcements, and public company news. Produces a weekly expansion opportunity list with context and talk tracks. Chains web search, LinkedIn profile monitoring, and job posting detection.