Living Review
Maintains a continuously updated, structured literature review for a research team, synthesizing findings from multiple sources and generating living documents as new work is published.
About this skill
The Living Review AI Agent Skill automates the maintenance of a dynamic, continuously updated literature review for research teams. It acts as a central hub for research papers, ingesting new work from sources such as PDFs, DOIs, arXiv IDs, and PubMed IDs into a shared knowledge base. From the team's collective reading it synthesizes findings, extracts key claims, methods, datasets, and limitations, and groups papers by theme or recency.

Researchers use this skill to keep an ongoing literature review current as new work is published. Typical tasks include rapidly updating an existing review, adding new papers with contextual tags, getting quick summaries of team-read literature on a specific topic, or giving new team members a fast overview of prior work. It also aids manuscript preparation by generating structured Markdown or LaTeX drafts of introductions and related work sections.

By tracking contributors, flagging papers that conflict with or extend each other, and supporting diffs against previous versions, Living Review turns a typically static and laborious process into a collaborative, traceable knowledge base, ensuring research teams always have access to the most current and relevant scientific landscape.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/living-review/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How Living Review Compares
| Feature / Agent | Living Review | Standard Approach |
|---|---|---|
| Platform Support | Multi-platform (Claude Code, Cursor, Codex) | Limited / varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Medium | N/A |
Frequently Asked Questions
What does this skill do?
Maintains a continuously updated, structured literature review for a research team, synthesizing findings from multiple sources and generating living documents as new work is published.
Which AI agents support this skill?
This skill is compatible with multiple AI agents, including Claude Code, Cursor, and Codex.
How difficult is it to install?
The installation complexity is rated as medium. You can find the installation instructions above.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Living Review
## Overview
Maintains a continuously updated, structured literature review for a research team. Ingests papers from multiple sources, synthesizes findings across the team's collective reading, and produces a living document that evolves as new work is published.
## When to Use
- User asks to "update our literature review" or "add this paper to the review"
- User wants a summary of what their team has read on a topic
- User asks "what do we know about X based on our papers?"
- Onboarding a new team member who needs a fast overview of prior work
- Preparing a manuscript introduction or related work section
## Key Capabilities
- Ingest PDFs, DOIs, arXiv IDs, or PubMed IDs into a shared knowledge base
- Extract key claims, methods, datasets, and limitations per paper
- Auto-group papers by theme, methodology, or recency
- Generate a structured Markdown or LaTeX review draft
- Track which team member added which paper and when
- Flag papers that conflict with or extend each other
- Diff the review against previous versions to show what changed
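The capabilities above imply a per-paper record that carries identity, provenance, and theme tags. The sketch below shows one minimal way such a knowledge base could be shaped; the `Paper` record and `group_by_tag` helper are illustrative assumptions, not the skill's actual internal representation.

```python
from dataclasses import dataclass, field
from datetime import date
from collections import defaultdict

# Hypothetical record shape for one ingested paper; the skill's real
# internal representation is not specified in SKILL.md.
@dataclass
class Paper:
    identifier: str            # DOI, arXiv ID, or PubMed ID
    title: str
    added_by: str              # team member who ingested the paper
    added_on: date
    tags: list = field(default_factory=list)

def group_by_tag(papers):
    """Group paper titles by theme tag, one bucket per tag."""
    groups = defaultdict(list)
    for paper in papers:
        for tag in paper.tags:
            groups[tag].append(paper.title)
    return dict(groups)

papers = [
    Paper("10.0000/example-1", "Attention in PLMs", "alice",
          date(2024, 1, 2), ["transformer"]),
    Paper("arXiv:0000.00000", "Folding benchmarks", "bob",
          date(2024, 1, 5), ["transformer", "benchmark"]),
]
print(group_by_tag(papers))
```

Grouping on tags rather than on a fixed taxonomy keeps the review flexible: a paper can appear under several themes, and recency-based views can reuse the same records via `added_on`.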
## Usage Examples
### Add a paper to the living review
```python
review.add_paper(
    doi="10.1038/s41586-024-00001-0",
    added_by="alice",
    tags=["transformer", "protein-folding", "benchmark"]
)
```
### Generate a living review draft on a topic
```python
review.generate_draft(
    topic="attention mechanisms in protein language models",
    format="latex",
    max_papers=40,
    include_team_notes=True
)
```
### Show what changed since a given date
```python
review.diff(since="2024-01-01", show_new_papers=True, show_updated_claims=True)
```
## Output Format
Produces structured Markdown with sections: Background, Key Methods, Datasets Used, Open Questions, Recent Additions. Each claim is traceable to a source paper and team contributor.
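A minimal sketch of how that sectioned, traceable layout could be rendered. The section names come from the description above; the rendering function, the claim tuples, and the bracketed citation style are assumptions for illustration only.

```python
# Section order as described in the Output Format paragraph.
SECTIONS = ["Background", "Key Methods", "Datasets Used",
            "Open Questions", "Recent Additions"]

def render_review(claims_by_section):
    """Render claims into the sectioned Markdown layout; each claim line
    carries its source paper and the contributor who added it."""
    lines = ["# Living Review", ""]
    for section in SECTIONS:
        lines.append(f"## {section}")
        for claim, source, contributor in claims_by_section.get(section, []):
            lines.append(f"- {claim} [{source}, added by {contributor}]")
        lines.append("")
    return "\n".join(lines)

draft = render_review({
    "Key Methods": [
        ("Example method claim", "doi:10.0000/example", "alice"),
    ],
})
print(draft)
```

Keeping the source and contributor inline with each claim is what makes the review "traceable": any sentence in a generated draft can be followed back to a paper and a team member.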
## Notes
- Works best when combined with `arxiv-monitor` and `semantic-scholar` skills for automatic ingestion
- Team notes and annotations are preserved across updates — never overwritten
- Supports BibTeX export for manuscript preparation
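For the BibTeX export mentioned above, each paper record would map onto a standard entry. The helper below is a hedged sketch of what one exported `@article` entry might look like; the function name and field coverage are assumptions, since SKILL.md does not document the exporter.

```python
# Hypothetical per-paper BibTeX formatter; the skill's real exporter
# and its field coverage are not documented.
def to_bibtex(key, title, author, year, doi):
    """Format one paper as a BibTeX @article entry."""
    return (
        f"@article{{{key},\n"
        f"  title  = {{{title}}},\n"
        f"  author = {{{author}}},\n"
        f"  year   = {{{year}}},\n"
        f"  doi    = {{{doi}}}\n"
        f"}}"
    )

entry = to_bibtex("smith2024attn", "Attention in Protein Language Models",
                  "Smith, J.", 2024, "10.0000/example")
print(entry)
```

An exporter along these lines would let a team drop the generated `.bib` file straight into a manuscript's LaTeX build.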