## Best use case
Reddit Read-Only is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
## Overview
Teams using Reddit Read-Only should expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
## When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation
### Claude Code / Cursor / Codex
Manual installation:
- Download `SKILL.md` from GitHub.
- Place it at `.claude/skills/reddit-readonly/SKILL.md` inside your project.
- Restart your AI agent; it will auto-discover the skill.
## How Reddit Read-Only Compares
| Feature | Reddit Read-Only | Standard Approach |
|---|---|---|
| Platform Support | Claude Code / Cursor / Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
## Frequently Asked Questions
### What does this skill do?
It lets an AI agent browse and search Reddit through the public JSON API: fetching posts from subreddits, searching for topics, retrieving comment threads, and checking trending content. No API key or authentication is needed, and all access is read-only.
### Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# Reddit Read-Only
## Overview
Browse and search Reddit programmatically using the public JSON API. Fetch posts from subreddits, search for topics, retrieve comment threads, and access trending content. No API key or authentication is needed. All access is read-only.
## Instructions
When a user asks you to browse or search Reddit, follow these steps:
### Step 1: Determine the request type
Identify what the user wants; a rough routing sketch follows the list:
- **Browse a subreddit**: Fetch posts from a specific subreddit (hot, new, top, rising)
- **Search Reddit**: Find posts matching a query across Reddit or within a subreddit
- **Read a thread**: Fetch a specific post and its comments
- **Get trending content**: Check what is popular right now
### Step 2: Use the public JSON API
Reddit exposes JSON data by appending `.json` to most URLs:
```python
import requests
import time
from datetime import datetime, timezone
HEADERS = {"User-Agent": "reddit-readonly-bot/1.0.0"}
BASE_URL = "https://www.reddit.com"
def get_subreddit_posts(subreddit, sort="hot", time_filter="day", limit=25):
"""Fetch posts from a subreddit.
Args:
subreddit: Subreddit name without r/ prefix
sort: One of 'hot', 'new', 'top', 'rising'
time_filter: For 'top' sort: 'hour', 'day', 'week', 'month', 'year', 'all'
limit: Number of posts (max 100)
"""
url = f"{BASE_URL}/r/{subreddit}/{sort}.json"
params = {"limit": min(limit, 100), "t": time_filter}
response = requests.get(url, headers=HEADERS, params=params, timeout=30)
response.raise_for_status()
time.sleep(1)
posts = []
for child in response.json()["data"]["children"]:
p = child["data"]
posts.append({
"title": p["title"],
"author": p.get("author", "[deleted]"),
"score": p["score"],
"num_comments": p["num_comments"],
"selftext": p.get("selftext", "")[:500],
"url": p.get("url", ""),
"permalink": f"https://reddit.com{p['permalink']}",
"created": datetime.utcfromtimestamp(p["created_utc"]).isoformat(),
"subreddit": p["subreddit"],
})
return posts
def search_reddit(query, subreddit=None, sort="relevance", time_filter="year", limit=25):
"""Search for posts matching a query."""
if subreddit:
url = f"{BASE_URL}/r/{subreddit}/search.json"
params = {"q": query, "sort": sort, "t": time_filter,
"limit": min(limit, 100), "restrict_sr": "on"}
else:
url = f"{BASE_URL}/search.json"
params = {"q": query, "sort": sort, "t": time_filter,
"limit": min(limit, 100)}
response = requests.get(url, headers=HEADERS, params=params, timeout=30)
response.raise_for_status()
time.sleep(1)
posts = []
for child in response.json()["data"]["children"]:
p = child["data"]
posts.append({
"title": p["title"],
"score": p["score"],
"num_comments": p["num_comments"],
"subreddit": p["subreddit"],
"permalink": f"https://reddit.com{p['permalink']}",
"selftext": p.get("selftext", "")[:300],
})
return posts
def get_comments(permalink, sort="top", limit=100):
"""Fetch comments for a post given its permalink path."""
# permalink should be like /r/subreddit/comments/id/title/
url = f"{BASE_URL}{permalink}.json"
params = {"sort": sort, "limit": limit}
response = requests.get(url, headers=HEADERS, params=params, timeout=30)
response.raise_for_status()
time.sleep(1)
comments = []
data = response.json()
if len(data) > 1:
_extract_comments(data[1]["data"]["children"], comments, depth=0)
return comments
def _extract_comments(children, comments, depth):
"""Recursively extract comments from nested structure."""
for child in children:
if child["kind"] != "t1":
continue
c = child["data"]
comments.append({
"body": c["body"],
"author": c.get("author", "[deleted]"),
"score": c["score"],
"depth": depth,
})
# Extract replies
if c.get("replies") and isinstance(c["replies"], dict):
_extract_comments(
c["replies"]["data"]["children"], comments, depth + 1
)
```
### Step 3: Format and present results
Format the output clearly for the user:
**For subreddit browsing:**
```
r/programming - Hot Posts
========================
1. [523 pts | 89 comments] "Why Rust is replacing C++ in embedded systems"
https://reddit.com/r/programming/comments/abc123/...
2. [312 pts | 45 comments] "SQLite internals: How the query planner works"
https://reddit.com/r/programming/comments/def456/...
3. [298 pts | 112 comments] "Ask r/programming: What's your unpopular tech opinion?"
Preview: "I'll start: ORMs cause more problems than they solve..."
https://reddit.com/r/programming/comments/ghi789/...
```
**For comment threads:**
```
Thread: "Why did you switch from VS Code to Neovim?"
r/neovim | 445 pts | 203 comments
Top Comments:
[189 pts] u/vimuser42: "Speed. My VS Code took 8 seconds to open a
large TypeScript project. Neovim opens instantly."
[67 pts] u/reply_user: "Same experience. The LSP integration in
Neovim has gotten so good there's no feature gap anymore."
[145 pts] u/pragmatic_dev: "Honestly, the keybindings. Once you learn
modal editing, going back to click-and-type feels slow."
```
### Step 4: Handle pagination for large requests
```python
def get_all_posts(subreddit, sort="new", limit=500):
"""Fetch multiple pages of posts using pagination."""
all_posts = []
after = None
while len(all_posts) < limit:
url = f"{BASE_URL}/r/{subreddit}/{sort}.json"
params = {"limit": 100}
if after:
params["after"] = after
response = requests.get(url, headers=HEADERS, params=params, timeout=30)
response.raise_for_status()
time.sleep(1)
data = response.json()["data"]
children = data["children"]
if not children:
break
for child in children:
all_posts.append(child["data"])
after = data.get("after")
if not after:
break
return all_posts[:limit]
```
## Examples
### Example 1: Browse top posts in a subreddit
**User request:** "Show me the top posts in r/machinelearning from this week."
**Execution:**
```python
posts = get_subreddit_posts("machinelearning", sort="top", time_filter="week", limit=10)
for i, post in enumerate(posts, 1):
print(f"{i}. [{post['score']} pts] {post['title']}")
print(f" {post['permalink']}")
```
### Example 2: Search for a specific topic
**User request:** "Find Reddit discussions about migrating from MongoDB to PostgreSQL."
**Execution:**
```python
posts = search_reddit(
query="migrate MongoDB to PostgreSQL",
sort="relevance",
time_filter="year",
limit=20
)
```
### Example 3: Read a full comment thread
**User request:** "Read the comments on this Reddit post: https://reddit.com/r/webdev/comments/xyz/..."
**Execution:**
```python
permalink = "/r/webdev/comments/xyz/post_title/"
comments = get_comments(permalink, sort="top", limit=50)
for c in comments:
indent = " " * c["depth"]
print(f"{indent}[{c['score']} pts] u/{c['author']}: {c['body'][:200]}")
```
## Guidelines
- Always include a 1-second delay between requests to avoid being rate-limited by Reddit.
- Set a descriptive User-Agent header. Reddit blocks requests without one and may return 429 errors.
- The public JSON API has a hard limit of 100 items per request. Use the `after` parameter for pagination.
- All access is read-only. This skill cannot post, vote, or modify any Reddit content.
- Truncate long selftext and comment bodies when displaying summaries. Show full text only when the user requests a specific post.
- Handle deleted posts and comments gracefully. Check for `[deleted]` or `[removed]` content.
- Reddit may return 403 or 429 errors during high traffic. Implement retry logic with exponential backoff (see the sketch after this list).
- Respect that some subreddits are private and will return 403 errors. Inform the user and suggest alternatives.
- Do not attempt to access quarantined or NSFW subreddits without the user explicitly requesting it.
- Always provide permalink URLs so the user can visit the original discussion in their browser.
## Related Skills
thread-dump-analyzer
Thread Dump Analyzer - Auto-activating skill for Performance Testing. Triggers on: thread dump analyzer. Part of the Performance Testing skill category.
readme-generator
Readme Generator - Auto-activating skill for DevOps Basics. Triggers on: readme generator. Part of the DevOps Basics skill category.
reddit-post-writer
Master authentic Reddit content generator using emotion-first, phased architecture. Creates posts that sound genuinely human through cognitive state simulation, not just rule-following. Use when the user asks to write a Reddit post, create Reddit content, or needs help with Reddit engagement. Includes adversarial committee review, Claude-ism detection, and interactive refinement workflow.
gws-gmail-read
Gmail: Read a message and extract its body or headers.
readme-blueprint-generator
Intelligent README.md generation prompt that analyzes project documentation structure and creates comprehensive repository documentation. Scans .github/copilot directory files and copilot-instructions.md to extract project information, technology stack, architecture, development workflow, coding standards, and testing approaches while generating well-structured markdown documentation with proper formatting, cross-references, and developer-focused content.
create-readme
Create a README.md file for the project
twitter-reader
Fetch Twitter/X post content by URL using jina.ai API to bypass JavaScript restrictions. Use when Claude needs to retrieve tweet content including author, timestamp, post text, images, and thread replies. Supports individual posts or batch fetching from x.com or twitter.com URLs.
safe-file-reader
Read files from documents directory safely
deep-reading-analyst
Comprehensive framework for deep analysis of articles, papers, and long-form content using 10+ thinking models (SCQA, 5W2H, critical thinking, inversion, mental models, first principles, systems thinking, six thinking hats). Use when users want to: (1) deeply understand complex articles/content, (2) analyze arguments and identify logical flaws, (3) extract actionable insights from reading materials, (4) create study notes or learning summaries, (5) compare multiple sources, (6) transform knowledge into practical applications, or (7) apply specific thinking frameworks. Triggered by phrases like 'analyze this article,' 'help me understand,' 'deep dive into,' 'extract insights from,' 'use [framework name],' or when users provide URLs/long-form content for analysis.
readme-i18n
Use when the user wants to translate a repository README, make a repo multilingual, localize docs, add a language switcher, internationalize the README, or update localized README variants in a GitHub-style repository.
crafting-effective-readmes
Use when writing or improving README files. Not all READMEs are the same — provides templates and guidance matched to your audience and project type.
screen-reader-testing
Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.