twscrape

Python library for scraping Twitter/X data using GraphQL API with account rotation and session management. Use when extracting tweets, user profiles, followers, trends, or building social media monitoring tools.

242 stars

Best use case

twscrape is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams that repeatedly extract tweets, user profiles, followers, or trends, or that maintain social media monitoring pipelines.


Users should expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.

Practical example

Example input

Use the "twscrape" skill to help with this workflow task. Context: Python library for scraping Twitter/X data using GraphQL API with account rotation and session management. Use when extracting tweets, user profiles, followers, trends, or building social media monitoring tools.

Example output

A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.

When to use this skill

  • Use this skill when you want a reusable workflow rather than writing the same prompt again and again.

When not to use this skill

  • Do not use this when you only need a one-off answer and do not need a reusable workflow.
  • Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/twscrape/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/2025emma/twscrape/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/twscrape/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How twscrape Compares

Feature / Agent         | twscrape      | Standard Approach
Platform Support        | Not specified | Limited / Varies
Context Awareness       | High          | Baseline
Installation Complexity | Unknown       | N/A

Frequently Asked Questions

What does this skill do?

Python library for scraping Twitter/X data using GraphQL API with account rotation and session management. Use when extracting tweets, user profiles, followers, trends, or building social media monitoring tools.

Where can I find the source code?

You can find the source code on GitHub at https://github.com/vladkens/twscrape.

SKILL.md Source

# twscrape

Python library for scraping Twitter/X data using GraphQL API with account rotation and session management.

## When to use this skill

Use this skill when:
- Working with Twitter/X data extraction and scraping
- Need to bypass Twitter API limitations with account rotation
- Building social media monitoring or analytics tools
- Extracting tweets, user profiles, followers, trends from Twitter/X
- Need async/parallel scraping operations for large-scale data collection
- Looking for alternatives to official Twitter API

## Quick Reference

### Installation

```bash
pip install twscrape
```

### Basic Setup

```python
import asyncio
from twscrape import API, gather

async def main():
    api = API()  # Uses accounts.db by default

    # Add accounts (with cookies - more stable)
    cookies = "abc=12; ct0=xyz"
    await api.pool.add_account("user1", "pass1", "email@example.com", "mail_pass", cookies=cookies)

    # Or add accounts (with login/password - less stable)
    await api.pool.add_account("user2", "pass2", "email2@example.com", "mail_pass2")
    await api.pool.login_all()

asyncio.run(main())
```

### Common Operations

```python
# Search tweets
await gather(api.search("elon musk", limit=20))

# Get user info
await api.user_by_login("xdevelopers")
user = await api.user_by_id(2244994945)

# Get user tweets
await gather(api.user_tweets(user_id, limit=20))
await gather(api.user_tweets_and_replies(user_id, limit=20))
await gather(api.user_media(user_id, limit=20))

# Get followers/following
await gather(api.followers(user_id, limit=20))
await gather(api.following(user_id, limit=20))

# Tweet operations
await api.tweet_details(tweet_id)
await gather(api.retweeters(tweet_id, limit=20))
await gather(api.tweet_replies(tweet_id, limit=20))

# Trends
await gather(api.trends("news"))
```

## Key Features

### 1. Multiple API Support
- **Search API**: Standard Twitter search functionality
- **GraphQL API**: Advanced queries and data extraction
- **Automatic switching**: Based on rate limits and availability

### 2. Async/Await Architecture
```python
# Parallel scraping
async for tweet in api.search("elon musk"):
    print(tweet.id, tweet.user.username, tweet.rawContent)
```

### 3. Account Management
- Add multiple accounts for rotation
- Automatic rate limit handling
- Session persistence across runs
- Email verification support (IMAP or manual)

### 4. Data Models
- SNScrape-compatible models
- Easy conversion to dict/JSON
- Raw API response access available

## Core API Methods

### Search Operations

#### `search(query, limit, kv={})`
Search tweets by query string.

**Parameters:**
- `query` (str): Search query (supports Twitter search syntax)
- `limit` (int): Maximum number of tweets to return
- `kv` (dict): Additional parameters (e.g., `{"product": "Top"}` for Top tweets)

**Returns:** AsyncIterator of Tweet objects

**Example:**
```python
# Latest tweets
async for tweet in api.search("elon musk", limit=20):
    print(tweet.rawContent)

# Top tweets
await gather(api.search("python", limit=20, kv={"product": "Top"}))
```

### User Operations

#### `user_by_login(username)`
Get user information by username.

**Example:**
```python
user = await api.user_by_login("xdevelopers")
print(user.id, user.displayname, user.followersCount)
```

#### `user_by_id(user_id)`
Get user information by user ID.

#### `followers(user_id, limit)`
Get user's followers.

#### `following(user_id, limit)`
Get users that the user follows.

#### `verified_followers(user_id, limit)`
Get only verified followers.

#### `subscriptions(user_id, limit)`
Get user's Twitter Blue subscriptions.

### Tweet Operations

#### `tweet_details(tweet_id)`
Get detailed information about a specific tweet.

#### `tweet_replies(tweet_id, limit)`
Get replies to a tweet.

#### `retweeters(tweet_id, limit)`
Get users who retweeted a specific tweet.

#### `user_tweets(user_id, limit)`
Get tweets from a user (excludes replies).

#### `user_tweets_and_replies(user_id, limit)`
Get tweets and replies from a user.

#### `user_media(user_id, limit)`
Get tweets with media from a user.

### Other Operations

#### `list_timeline(list_id)`
Get tweets from a Twitter list.

#### `trends(category)`
Get trending topics by category.

**Categories:** "news", "sport", "entertainment", etc.

## Account Management

### Adding Accounts

**With cookies (recommended):**
```python
cookies = "abc=12; ct0=xyz"  # String or JSON format
await api.pool.add_account("user", "pass", "email@example.com", "mail_pass", cookies=cookies)
```

**With credentials:**
```python
await api.pool.add_account("user", "pass", "email@example.com", "mail_pass")
await api.pool.login_all()
```

### CLI Account Management

```bash
# Add accounts from file
twscrape add_accounts accounts.txt username:password:email:email_password

# Login all accounts
twscrape login_accounts

# Manual email verification
twscrape login_accounts --manual

# List accounts and status
twscrape accounts

# Re-login specific accounts
twscrape relogin user1 user2

# Retry failed logins
twscrape relogin_failed
```

## Proxy Configuration

### Per-Account Proxy
```python
proxy = "http://login:pass@example.com:8080"
await api.pool.add_account("user", "pass", "email@example.com", "mail_pass", proxy=proxy)
```

### Global Proxy
```python
api = API(proxy="http://login:pass@example.com:8080")
```

### Environment Variable
```bash
export TWS_PROXY=socks5://user:pass@127.0.0.1:1080
twscrape search "elon musk"
```

### Dynamic Proxy Changes
```python
api.proxy = "socks5://user:pass@127.0.0.1:1080"
doc = await api.user_by_login("elonmusk")
api.proxy = None  # Disable proxy
```

**Priority:** `api.proxy` > `TWS_PROXY` env var > account-specific proxy
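
This priority order can be sketched as a small resolver. Note that `resolve_proxy` and its arguments are hypothetical names used for illustration; they are not part of twscrape's API, which applies this logic internally:

```python
import os

def resolve_proxy(api_proxy=None, account_proxy=None):
    """Pick the effective proxy using the documented priority:
    api.proxy > TWS_PROXY env var > account-specific proxy."""
    if api_proxy is not None:
        return api_proxy
    env_proxy = os.environ.get("TWS_PROXY")
    if env_proxy:
        return env_proxy
    return account_proxy  # may be None: no proxy configured
```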

## CLI Usage

### Search Operations
```bash
twscrape search "QUERY" --limit=20
twscrape search "elon musk lang:es" --limit=20 > data.txt
twscrape search "python" --limit=20 --raw  # Raw API responses
```

### User Operations
```bash
twscrape user_by_login USERNAME
twscrape user_by_id USER_ID
twscrape followers USER_ID --limit=20
twscrape following USER_ID --limit=20
twscrape verified_followers USER_ID --limit=20
twscrape user_tweets USER_ID --limit=20
```

### Tweet Operations
```bash
twscrape tweet_details TWEET_ID
twscrape tweet_replies TWEET_ID --limit=20
twscrape retweeters TWEET_ID --limit=20
```

### Trends
```bash
twscrape trends sport
twscrape trends news
```

### Custom Database
```bash
twscrape --db custom-accounts.db <command>
```

## Advanced Usage

### Raw API Responses
```python
async for response in api.search_raw("elon musk"):
    print(response.status_code, response.json())
```

### Stopping Iteration
```python
from contextlib import aclosing

async with aclosing(api.search("elon musk")) as gen:
    async for tweet in gen:
        if tweet.id < 200:
            break
```

### Convert Models to Dict/JSON
```python
user = await api.user_by_id(user_id)
user_dict = user.dict()
user_json = user.json()
```
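
Since each model serializes itself with `.json()`, results are easy to persist as JSON Lines, one document per row. The helper below is an illustrative sketch, not part of twscrape:

```python
def write_jsonl(path, json_strings):
    """Append one JSON document per line (JSON Lines format)."""
    with open(path, "a", encoding="utf-8") as fh:
        for doc in json_strings:
            fh.write(doc + "\n")

# e.g. write_jsonl("users.jsonl", [user.json() for user in users])
```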

### Enable Debug Logging
```python
from twscrape.logger import set_log_level
set_log_level("DEBUG")
```

## Environment Variables

- **`TWS_PROXY`**: Global proxy for all accounts
  Example: `socks5://user:pass@127.0.0.1:1080`

- **`TWS_WAIT_EMAIL_CODE`**: Timeout for email verification (default: 30 seconds)

- **`TWS_RAISE_WHEN_NO_ACCOUNT`**: Raise exception when no accounts available instead of waiting
  Values: `false`, `0`, `true`, `1` (default: `false`)
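
A flag like `TWS_RAISE_WHEN_NO_ACCOUNT` is typically read along these lines (an illustrative sketch of how the accepted values map to a boolean, not twscrape's actual parsing code):

```python
import os

def env_flag(name: str, default: bool = False) -> bool:
    """Interpret 'true'/'1' as True and 'false'/'0' (or unset) as False."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ("true", "1")

raise_when_no_account = env_flag("TWS_RAISE_WHEN_NO_ACCOUNT")
```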

## Rate Limits & Limitations

### Rate Limits
- Rate limits reset **every 15 minutes** per endpoint
- Each account has **separate limits** for different operations
- Accounts automatically rotate when limits are reached

### Tweet Limits
- `user_tweets` and `user_tweets_and_replies` return approximately **3,200 tweets maximum** per user
- This is a Twitter/X platform limitation

### Account Status
- Rate limits vary based on:
  - Account age
  - Account verification status
  - Account activity history

### Handling Rate Limits
The library automatically:
- Switches to next available account
- Waits for rate limit reset if all accounts exhausted
- Tracks rate limit status per endpoint

## Common Patterns

### Large-Scale Data Collection
```python
async def collect_user_data(username):
    user = await api.user_by_login(username)

    # Collect tweets
    tweets = await gather(api.user_tweets(user.id, limit=100))

    # Collect followers
    followers = await gather(api.followers(user.id, limit=100))

    # Collect following
    following = await gather(api.following(user.id, limit=100))

    return {
        'user': user,
        'tweets': tweets,
        'followers': followers,
        'following': following
    }
```

### Search with Filters
```python
# Language filter
await gather(api.search("python lang:en", limit=20))

# Date filter
await gather(api.search("AI since:2024-01-01", limit=20))

# From specific user
await gather(api.search("from:elonmusk", limit=20))

# With media
await gather(api.search("cats filter:media", limit=20))
```
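
These filters all compose into a single query string, so a small builder keeps them readable. The helper below is illustrative and not part of twscrape; only the final string is what gets passed to `api.search`:

```python
def build_query(text, lang=None, since=None, from_user=None, media=False):
    """Compose a Twitter/X search query from common filter operators."""
    parts = [text]
    if lang:
        parts.append(f"lang:{lang}")
    if since:
        parts.append(f"since:{since}")
    if from_user:
        parts.append(f"from:{from_user}")
    if media:
        parts.append("filter:media")
    return " ".join(parts)

# build_query("AI", lang="en", since="2024-01-01")
# -> "AI lang:en since:2024-01-01"
```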

### Batch Processing
```python
async def process_users(usernames):
    tasks = []
    for username in usernames:
        task = api.user_by_login(username)
        tasks.append(task)

    users = await asyncio.gather(*tasks)
    return users
```
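
Unbounded `asyncio.gather` fires every request at once, which can burn through rate-limited accounts quickly. An `asyncio.Semaphore` caps concurrency; the sketch below uses a stub fetcher in place of `api.user_by_login` so it runs standalone (the helper names are illustrative, not twscrape API):

```python
import asyncio

async def process_users_bounded(usernames, fetch, max_concurrent=5):
    """Run `fetch(username)` for each username, at most
    `max_concurrent` coroutines at a time; results keep input order."""
    sem = asyncio.Semaphore(max_concurrent)

    async def bounded(name):
        async with sem:
            return await fetch(name)

    return await asyncio.gather(*(bounded(n) for n in usernames))

async def demo():
    async def fake_fetch(name):  # stand-in for api.user_by_login
        await asyncio.sleep(0)
        return name.upper()
    return await process_users_bounded(["alice", "bob"], fake_fetch, max_concurrent=2)
```

In real use, pass `api.user_by_login` (or any other async method) as `fetch`.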

## Troubleshooting

### Login Issues
- **Use cookies instead of credentials** for more stable authentication
- Enable **manual email verification** with `--manual` flag
- Check **email password** is correct for IMAP access

### Rate Limit Problems
- **Add more accounts** for better rotation
- **Increase wait time** between requests
- **Monitor account status** with `twscrape accounts`

### No Data Returned
- **Check account status** - they may be suspended or rate limited
- **Verify query syntax** - use Twitter search syntax
- **Try different accounts** - some may have better access

### Connection Issues
- **Configure proxy** if behind firewall
- **Check network connectivity**
- **Verify Twitter/X is accessible** from your location

## Resources

- **GitHub Repository**: https://github.com/vladkens/twscrape
- **Installation**: `pip install twscrape`
- **Development Version**: `pip install git+https://github.com/vladkens/twscrape.git`

## References

For detailed API documentation and examples, see the reference files in the `references/` directory:

- `references/installation.md` - Installation and setup
- `references/api_methods.md` - Complete API method reference
- `references/account_management.md` - Account configuration and management
- `references/cli_usage.md` - Command-line interface guide
- `references/proxy_config.md` - Proxy configuration options
- `references/examples.md` - Code examples and patterns

---

**Repository**: https://github.com/vladkens/twscrape
**Stars**: 1998+
**Language**: Python
**License**: MIT

Related Skills

azure-quotas

242
from aiskillstore/marketplace

Check/manage Azure quotas and usage across providers. For deployment planning, capacity validation, region selection. WHEN: "check quotas", "service limits", "current usage", "request quota increase", "quota exceeded", "validate capacity", "regional availability", "provisioning limits", "vCPU limit", "how many vCPUs available in my subscription".

DevOps & Infrastructure

raindrop-io

Manage Raindrop.io bookmarks with AI assistance. Save and organize bookmarks, search your collection, manage reading lists, and organize research materials. Use when working with bookmarks, web research, reading lists, or when user mentions Raindrop.io.

Data & Research

zlibrary-to-notebooklm

Automatically downloads books from Z-Library and uploads them to Google NotebookLM. Supports PDF/EPUB formats, automatic conversion, and one-click knowledge base creation.

discover-skills

Use when none of the currently available skills is a good fit (or when the user explicitly asks you to find a skill). Based on the task goals and constraints, this skill produces a concise shortlist of candidate skills to help you pick the best match for the current task.

web-performance-seo

Fix PageSpeed Insights/Lighthouse accessibility "!" errors caused by contrast audit failures (CSS filters, OKLCH/OKLAB, low opacity, gradient text, image backgrounds). Use for accessibility-driven SEO/performance debugging and remediation.

project-to-obsidian

Converts a code project into an Obsidian knowledge base. Activates when the user mentions obsidian, project documentation, knowledge base, analyze project, or convert project. Required on activation: 1. Read this SKILL.md file in full. 2. Understand the AI write rules (default to 00_Inbox/AI/, append-only, unified schema). 3. Run STEP 0: confirm with the user via AskUserQuestion. 4. Start the STEP 1 project scan only after the user confirms. 5. Follow the STEP 0 → 1 → 2 → 3 → 4 order strictly. Prohibited: starting project analysis without reading SKILL.md; skipping the STEP 0 user confirmation; creating files directly in 30_Resources (go through 00_Inbox/AI/ first); choosing the output location unilaterally.

obsidian-helper

Obsidian smart note assistant. Activates when the user mentions obsidian, journal, notes, knowledge base, capture, or review. Required on activation: 1. Read this SKILL.md file in full. 2. Understand the three hard AI write rules (00_Inbox/AI/, append-only, whitelisted fields). 3. Follow the STEP 0 → STEP 1 → ... order. 4. Do not skip steps or act unilaterally. Prohibited: starting work without reading SKILL.md; skipping user-confirmation steps; creating new notes outside 00_Inbox/AI/ (unless the user explicitly specifies otherwise).

internationalizing-websites

Adds multi-language support to Next.js websites with proper SEO configuration including hreflang tags, localized sitemaps, and language-specific content. Use when adding new languages, setting up i18n, optimizing for international SEO, or when user mentions localization, translation, multi-language, or specific languages like Japanese, Korean, Chinese.

google-official-seo-guide

Official Google SEO guide covering search optimization, best practices, Search Console, crawling, indexing, and improving website search visibility based on official Google documentation

github-release-assistant

Generate bilingual GitHub release documentation (README.md + README.zh.md) from repo metadata and user input, and guide release prep with git add/commit/push. Use when the user asks to write or polish README files, create bilingual docs, prepare a GitHub release, or mentions release assistant/README generation.

doc-sync-tool

Automatically synchronizes a project's Agents.md, claude.md, and gemini.md files to keep their content consistent. Supports automatic watching and manual triggering.

deploying-to-production

Automate creating a GitHub repository and deploying a web project to Vercel. Use when the user asks to deploy a website/app to production, publish a project, or set up GitHub + Vercel deployment.