brightdata-core-workflow-b
Execute Bright Data's secondary workflow, Core Workflow B: collect search engine results as structured JSON with the SERP API and trigger large-scale asynchronous collections with the Web Scraper API. Use when implementing the secondary use case or complementing the primary workflow. Trigger with phrases like "brightdata secondary workflow" or "secondary task with brightdata".
Best use case
brightdata-core-workflow-b is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using brightdata-core-workflow-b should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/brightdata-core-workflow-b/SKILL.md` inside your project
- Restart your AI agent so it auto-discovers the skill
How brightdata-core-workflow-b Compares
| Feature / Agent | brightdata-core-workflow-b | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single SKILL.md file | N/A |
Frequently Asked Questions
What does this skill do?
It executes Bright Data's secondary workflow (Core Workflow B): collecting search engine results as structured JSON via the SERP API and triggering large-scale asynchronous collections via the Web Scraper API, with webhook delivery of the finished datasets. Trigger it with phrases like "brightdata secondary workflow" or "secondary task with brightdata".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Bright Data SERP API & Web Scraper API
## Overview
Collect search engine results and trigger large-scale data collections using Bright Data's SERP API and Web Scraper API. SERP API returns structured JSON from Google, Bing, Yahoo, and other search engines. Web Scraper API triggers asynchronous collections with webhook delivery.
## Prerequisites
- Completed `brightdata-install-auth` setup
- SERP API zone or Web Scraper API dataset configured
- API token from Settings > API tokens (see the example `.env` after this list)
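The code in the steps below reads credentials from environment variables via `dotenv`. A minimal `.env` sketch using the variable names that appear in the code; the values are placeholders:
```bash
# .env (placeholder values; copy real credentials from the Bright Data control panel)
BRIGHTDATA_CUSTOMER_ID=your_customer_id
BRIGHTDATA_ZONE=your_serp_zone
BRIGHTDATA_ZONE_PASSWORD=your_zone_password
BRIGHTDATA_API_TOKEN=your_api_token
```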
## Instructions
### Step 1: SERP API — Synchronous Google Search
```typescript
// serp-api.ts
import 'dotenv/config';
// Native fetch does not honor a Proxy-Authorization header on a direct request,
// so route the call through the Bright Data super proxy with undici's ProxyAgent
// (npm install undici).
import { fetch, ProxyAgent } from 'undici';
const { BRIGHTDATA_CUSTOMER_ID, BRIGHTDATA_ZONE, BRIGHTDATA_ZONE_PASSWORD } = process.env;
async function searchGoogle(query: string, country = 'us') {
  // SERP API uses the proxy protocol; &brd_json=1 asks for a parsed JSON response
  const username = `brd-customer-${BRIGHTDATA_CUSTOMER_ID}-zone-${BRIGHTDATA_ZONE}-country-${country}`;
  const dispatcher = new ProxyAgent({
    // Host/port come from your zone's access details (typically brd.superproxy.io:33335)
    uri: 'http://brd.superproxy.io:33335',
    token: `Basic ${Buffer.from(`${username}:${BRIGHTDATA_ZONE_PASSWORD}`).toString('base64')}`,
    // For HTTPS targets, install Bright Data's CA certificate; disabling verification is for testing only
    requestTls: { rejectUnauthorized: false },
  });
  const response = await fetch(
    `https://www.google.com/search?q=${encodeURIComponent(query)}&brd_json=1`,
    { dispatcher }
  );
  const results = (await response.json()) as any;
  console.log(`Query: "${query}"`);
  console.log(`Results: ${results.organic?.length || 0} organic`);
  for (const r of results.organic?.slice(0, 5) || []) {
    console.log(`  ${r.rank}. ${r.title} — ${r.link}`);
  }
  return results;
}
searchGoogle('bright data web scraping').catch(console.error);
```
### Step 2: SERP API — Structured JSON Response
The SERP API returns structured data when you append `&brd_json=1`:
```typescript
interface SERPResponse {
  organic: Array<{
    rank: number;
    title: string;
    link: string;
    description: string;
    displayed_link: string;
  }>;
  paid?: Array<{ title: string; link: string; description: string }>;
  knowledge_graph?: { title: string; description: string };
  related_searches?: string[];
  total_results?: number;
}
```
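To get typed access to the parsed fields, you can cast the result of the Step 1 helper to this interface. A minimal sketch, assuming `searchGoogle` from Step 1 and an ESM module (top-level `await`):
```typescript
// Sketch: typed handling of a parsed SERP response (assumes searchGoogle from Step 1)
const results = (await searchGoogle('bright data web scraping')) as SERPResponse;

const topThree = results.organic.slice(0, 3).map(r => ({
  rank: r.rank,
  title: r.title,
  link: r.link,
}));
console.log('Top organic results:', topThree);
console.log('Related searches:', results.related_searches ?? []);
console.log('Total results reported:', results.total_results ?? 'n/a');
```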
### Step 3: Web Scraper API — Async Collection with Webhook
```typescript
// web-scraper-api.ts — trigger large-scale collections
import 'dotenv/config';
const API_TOKEN = process.env.BRIGHTDATA_API_TOKEN!;
async function triggerCollection(
  datasetId: string,
  urls: string[],
  webhookUrl?: string
) {
  const params = new URLSearchParams({
    dataset_id: datasetId,
    format: 'json',
    uncompressed_webhook: 'true',
  });
  if (webhookUrl) params.set('endpoint', webhookUrl);
  const response = await fetch(
    `https://api.brightdata.com/datasets/v3/trigger?${params}`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(urls.map(url => ({ url }))),
    }
  );
  const result = await response.json();
  console.log('Collection triggered:', result.snapshot_id);
  return result;
}

// Check collection status
async function getCollectionStatus(snapshotId: string) {
  const response = await fetch(
    `https://api.brightdata.com/datasets/v3/snapshot/${snapshotId}?format=json`,
    { headers: { 'Authorization': `Bearer ${API_TOKEN}` } },
  );
  if (response.status === 200) {
    const data = await response.json();
    console.log('Collection complete:', data.length, 'records');
    return data;
  } else if (response.status === 202) {
    console.log('Collection still running...');
    return null;
  }
}
```
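A snapshot is usually not ready immediately; the error-handling table below suggests polling every 10 seconds while the status endpoint returns `202`. A minimal sketch combining the two helpers above (the dataset ID and URL are placeholders):
```typescript
// Sketch: trigger a collection, then poll until the snapshot returns 200 (ready)
async function collectAndWait(datasetId: string, urls: string[], maxAttempts = 60) {
  const { snapshot_id } = (await triggerCollection(datasetId, urls)) as { snapshot_id: string };
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const data = await getCollectionStatus(snapshot_id);
    if (data) return data; // 200: snapshot is complete
    await new Promise(resolve => setTimeout(resolve, 10_000)); // 202: wait 10s and try again
  }
  throw new Error(`Snapshot ${snapshot_id} not ready after ${maxAttempts} polls`);
}

// Placeholder dataset ID and URL; use your own dataset from the control panel
collectAndWait('gd_your_dataset_id', ['https://example.com/product/1'])
  .then(records => console.log('Collected records:', records.length))
  .catch(console.error);
```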
### Step 4: Python SERP API
```python
# serp_api.py
import os, requests
from dotenv import load_dotenv

load_dotenv()
API_TOKEN = os.environ['BRIGHTDATA_API_TOKEN']

def search_google(query: str, country: str = 'us'):
    """Trigger a SERP API collection via REST."""
    resp = requests.post(
        'https://api.brightdata.com/datasets/v3/trigger',
        params={'dataset_id': 'gd_lwdb4vjm1ehb499uxs', 'format': 'json'},
        headers={'Authorization': f'Bearer {API_TOKEN}', 'Content-Type': 'application/json'},
        json=[{'keyword': query, 'country': country, 'engine': 'google'}],
    )
    print(f"Snapshot ID: {resp.json().get('snapshot_id')}")
    return resp.json()
```
## Output
- Structured SERP results in JSON with organic, paid, and knowledge graph data
- Async collection snapshot IDs for large-scale scraping
- Webhook delivery of completed datasets (a minimal receiver sketch follows)
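When a `webhookUrl` is passed to `triggerCollection` (Step 3 sets `format: 'json'` and `uncompressed_webhook: 'true'`), Bright Data POSTs the finished records to that endpoint. A minimal receiver sketch; the port and path are illustrative assumptions, not Bright Data requirements:
```typescript
// webhook-receiver.ts: sketch of an endpoint for uncompressed JSON webhook deliveries
import http from 'node:http';

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/brightdata-webhook') {
    let body = '';
    req.on('data', chunk => (body += chunk));
    req.on('end', () => {
      // With format: 'json' and uncompressed_webhook: 'true', the body is plain JSON
      const records = JSON.parse(body);
      console.log(`Received ${Array.isArray(records) ? records.length : 1} record(s)`);
      res.writeHead(200).end('ok');
    });
  } else {
    res.writeHead(404).end();
  }
}).listen(3000, () => console.log('Webhook receiver listening on :3000'));
```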
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| `401 Unauthorized` | Invalid API token | Regenerate at Settings > API tokens |
| `400 Bad Request` | Invalid dataset_id | Check dataset ID in control panel |
| `202 Accepted` polling | Collection in progress | Poll every 10s until 200 |
| Rate limited | Too many triggers | Max 20 triggers/min per dataset |
## Resources
- [SERP API Docs](https://docs.brightdata.com/scraping-automation/serp-api/overview)
- [Web Scraper API Trigger](https://docs.brightdata.com/scraping-automation/web-data-apis/web-scraper-api/trigger-a-collection)
- [SERP API GitHub](https://github.com/luminati-io/serp-api)
## Next Steps
For common errors, see `brightdata-common-errors`.
Related Skills
step-functions-workflow
Step Functions Workflow - Auto-activating skill for AWS Skills. Triggers on: step functions workflow. Part of the AWS Skills skill category.
sprint-workflow
This skill should be used when the user asks about "how sprints work", "sprint phases", "iteration workflow", "convergent development", "sprint lifecycle", "when to use sprints", or wants to understand the sprint execution model and its convergent diffusion approach. Use when the appropriate context is detected. Trigger with relevant phrases based on the skill's purpose.
scorecard-marketing
Build quiz and assessment funnels that generate qualified leads at 30-50% conversion. Use when the user mentions "lead magnet", "quiz funnel", "assessment tool", "lead generation", or "score-based segmentation". Covers question design, dynamic results by tier, and automated follow-up sequences. For landing page conversion, see cro-methodology. For full marketing plans, see one-page-marketing. Trigger with 'scorecard', 'marketing'.
n8n-workflow-generator
N8N Workflow Generator - Auto-activating skill for Business Automation. Triggers on: n8n workflow generator. Part of the Business Automation skill category.
jira-workflow-creator
Jira Workflow Creator - Auto-activating skill for Enterprise Workflows. Triggers on: jira workflow creator. Part of the Enterprise Workflows skill category.
building-gitops-workflows
This skill enables Claude to construct GitOps workflows using ArgoCD and Flux. It is designed to generate production-ready configurations, implement best practices, and ensure a security-first approach for Kubernetes deployments. Use this skill when the user explicitly requests "GitOps workflow", "ArgoCD", "Flux", or asks for help with setting up a continuous delivery pipeline using GitOps principles. The skill will generate the necessary configuration files and setup code based on the user's specific requirements and infrastructure.
git-workflow-manager
Git Workflow Manager - Auto-activating skill for DevOps Basics. Triggers on: git workflow manager. Part of the DevOps Basics skill category.
fathom-core-workflow-b
Sync Fathom meeting data to CRM and build automated follow-up workflows. Use when integrating Fathom with Salesforce, HubSpot, or custom CRMs, or creating automated post-meeting email summaries. Trigger with phrases like "fathom crm sync", "fathom salesforce", "fathom follow-up", "fathom post-meeting workflow".
fathom-core-workflow-a
Build a meeting analytics pipeline with Fathom transcripts and summaries. Use when extracting insights from meetings, building CRM sync, or creating automated meeting follow-up workflows. Trigger with phrases like "fathom analytics", "fathom meeting pipeline", "fathom transcript analysis", "fathom action items sync".
exa-core-workflow-b
Execute Exa findSimilar, getContents, answer, and streaming answer workflows. Use when finding pages similar to a URL, retrieving content for known URLs, or getting AI-generated answers with citations. Trigger with phrases like "exa find similar", "exa get contents", "exa answer", "exa similarity search", "findSimilarAndContents".
exa-core-workflow-a
Execute Exa neural search with contents, date filters, and domain scoping. Use when building search features, implementing RAG context retrieval, or querying the web with semantic understanding. Trigger with phrases like "exa search", "exa neural search", "search with exa", "exa searchAndContents", "exa query".
evernote-core-workflow-b
Execute Evernote secondary workflow: Search and Retrieval. Use when implementing search features, finding notes, filtering content, or building search interfaces. Trigger with phrases like "search evernote", "find evernote notes", "evernote search", "query evernote".