performing-timeline-reconstruction-with-plaso
Build comprehensive forensic super-timelines using Plaso (log2timeline) to correlate events across file systems, logs, and artifacts into a unified chronological view.
Best use case
performing-timeline-reconstruction-with-plaso is best used when you need a repeatable AI agent workflow instead of a one-off prompt. Teams using it should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/performing-timeline-reconstruction-with-plaso/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Build comprehensive forensic super-timelines using Plaso (log2timeline) to correlate events across file systems, logs, and artifacts into a unified chronological view.
Where can I find the source code?
You can find the source code in the skill's GitHub repository.
SKILL.md Source
# Performing Timeline Reconstruction with Plaso
## When to Use
- When building a comprehensive forensic timeline from multiple evidence sources
- For correlating events across file system metadata, event logs, browser history, and registry
- During complex investigations requiring chronological reconstruction of activities
- When standard log analysis is insufficient to establish the sequence of events
- For presenting investigation findings in a visual, chronological format
## Prerequisites
- Plaso (log2timeline/psort) installed on forensic workstation
- Forensic disk image(s) in raw (dd), E01, or VMDK format
- Sufficient storage for Plaso output (can be 10x+ the image size)
- Minimum 8GB RAM (16GB+ recommended for large images)
- Timeline Explorer (Eric Zimmerman) or Timesketch for visualization
- Understanding of timestamp types (MACB: Modified, Accessed, Changed, Born)
## Workflow
### Step 1: Install Plaso and Prepare the Environment
```bash
# Install Plaso on Ubuntu/Debian
sudo add-apt-repository ppa:gift/stable
sudo apt-get update
sudo apt-get install plaso-tools
# Or install via pip
pip install plaso
# Or use Docker (recommended for dependency isolation)
docker pull log2timeline/plaso
# Verify installation
log2timeline.py --version
psort.py --version
# Create output directory
mkdir -p /cases/case-2024-001/timeline/
# Verify the forensic image
img_stat /cases/case-2024-001/images/evidence.dd
```
### Step 2: Generate the Plaso Storage File with log2timeline
```bash
# Basic processing of a disk image (all parsers)
log2timeline.py \
--storage-file /cases/case-2024-001/timeline/evidence.plaso \
/cases/case-2024-001/images/evidence.dd
# Process with specific parsers for faster targeted analysis
log2timeline.py \
--parsers "winevtx,prefetch,mft,usnjrnl,lnk,recycle_bin,chrome_history,firefox_history,winreg" \
--storage-file /cases/case-2024-001/timeline/evidence.plaso \
/cases/case-2024-001/images/evidence.dd
# Process with a filter file to focus on specific paths
cat << 'EOF' > /cases/case-2024-001/timeline/filter.txt
/Windows/System32/winevt/Logs
/Windows/Prefetch
/Users/*/NTUSER.DAT
/Users/*/AppData/Local/Google/Chrome
/Users/*/AppData/Roaming/Mozilla/Firefox
/$MFT
/$UsnJrnl:$J
/Windows/System32/config
EOF
log2timeline.py \
--filter-file /cases/case-2024-001/timeline/filter.txt \
--storage-file /cases/case-2024-001/timeline/evidence.plaso \
/cases/case-2024-001/images/evidence.dd
# Using Docker
docker run --rm -v /cases:/cases log2timeline/plaso log2timeline \
--storage-file /cases/case-2024-001/timeline/evidence.plaso \
/cases/case-2024-001/images/evidence.dd
# Process multiple evidence sources into one timeline
log2timeline.py \
--storage-file /cases/case-2024-001/timeline/combined.plaso \
/cases/case-2024-001/images/workstation.dd
log2timeline.py \
--storage-file /cases/case-2024-001/timeline/combined.plaso \
/cases/case-2024-001/images/server.dd
```
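When several evidence sources feed one storage file, a small driver script keeps runs repeatable. A minimal Python sketch (the paths are illustrative, and the injectable `run` hook is an assumption added so the planned commands can be inspected without actually invoking Plaso):

```python
import subprocess

def build_l2t_command(storage_file, source, parsers=None, filter_file=None):
    """Assemble one log2timeline.py invocation as an argument list."""
    cmd = ["log2timeline.py", "--storage-file", storage_file]
    if parsers:
        cmd += ["--parsers", parsers]
    if filter_file:
        cmd += ["--filter-file", filter_file]
    cmd.append(source)
    return cmd

def process_sources(storage_file, sources, run=subprocess.run):
    """Process each evidence source in turn, appending events into a
    single storage file. `run` defaults to subprocess.run but can be
    swapped out to dry-run the plan."""
    for source in sources:
        run(build_l2t_command(storage_file, source), check=True)
```

Sequential processing into one storage file mirrors the combined-timeline commands above; parallelism is left to Plaso's own worker processes.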
### Step 3: Filter and Export Timeline with psort
```bash
# Export full timeline to CSV (super-timeline format)
psort.py \
-o l2tcsv \
-w /cases/case-2024-001/timeline/full_timeline.csv \
/cases/case-2024-001/timeline/evidence.plaso
# Export with date range filter (focus on incident window)
psort.py \
-o l2tcsv \
-w /cases/case-2024-001/timeline/incident_window.csv \
/cases/case-2024-001/timeline/evidence.plaso \
"date > '2024-01-15 00:00:00' AND date < '2024-01-20 23:59:59'"
# Export in JSON Lines format (for ingestion into SIEM/Timesketch)
psort.py \
-o json_line \
-w /cases/case-2024-001/timeline/timeline.jsonl \
/cases/case-2024-001/timeline/evidence.plaso
# Export with specific source type filters
psort.py \
-o l2tcsv \
-w /cases/case-2024-001/timeline/registry_events.csv \
/cases/case-2024-001/timeline/evidence.plaso \
"source_short == 'REG'"
psort.py \
-o l2tcsv \
-w /cases/case-2024-001/timeline/evtx_events.csv \
/cases/case-2024-001/timeline/evidence.plaso \
"source_short == 'EVT'"
# Export for Timeline Explorer (dynamic CSV)
psort.py \
-o dynamic \
-w /cases/case-2024-001/timeline/timeline_explorer.csv \
/cases/case-2024-001/timeline/evidence.plaso
```
### Step 4: Analyze Timeline with Timesketch
```bash
# Install Timesketch (Docker deployment)
git clone https://github.com/google/timesketch.git
cd timesketch
docker compose up -d
# Import Plaso file into Timesketch via CLI
timesketch_importer \
--host http://localhost:5000 \
--username analyst \
--password password \
--sketch_id 1 \
--timeline_name "Case 2024-001 Workstation" \
/cases/case-2024-001/timeline/evidence.plaso
# Alternatively, import JSONL
timesketch_importer \
--host http://localhost:5000 \
--username analyst \
--sketch_id 1 \
--timeline_name "Case 2024-001" \
/cases/case-2024-001/timeline/timeline.jsonl
# In Timesketch web UI:
# 1. Search for events: "data_type:windows:evtx:record AND event_identifier:4624"
# 2. Apply Sigma analyzers for automated detection
# 3. Star/tag important events
# 4. Create stories documenting the investigation narrative
# 5. Share with team members
```
### Step 5: Perform Targeted Timeline Analysis
```bash
# Analyze specific time periods around known events
python3 << 'PYEOF'
import csv
from collections import defaultdict
from datetime import datetime

# Load incident window timeline
events_by_hour = defaultdict(list)
source_counts = defaultdict(int)

with open('/cases/case-2024-001/timeline/incident_window.csv', 'r', errors='ignore') as f:
    reader = csv.DictReader(f)
    total = 0
    for row in reader:
        total += 1
        timestamp = row.get('datetime', row.get('date', ''))
        source = row.get('source_short', row.get('source', 'Unknown'))
        description = row.get('message', row.get('desc', ''))
        source_counts[source] += 1
        # Group by hour for activity patterns
        try:
            dt = datetime.strptime(timestamp[:19], '%Y-%m-%dT%H:%M:%S')
            hour_key = dt.strftime('%Y-%m-%d %H:00')
            events_by_hour[hour_key].append({
                'time': timestamp,
                'source': source,
                'description': description[:200]
            })
        except (ValueError, TypeError):
            pass

print(f"Total events in incident window: {total}\n")
print("=== EVENTS BY SOURCE TYPE ===")
for source, count in sorted(source_counts.items(), key=lambda x: x[1], reverse=True):
    print(f" {source}: {count}")

print("\n=== ACTIVITY BY HOUR ===")
for hour in sorted(events_by_hour.keys()):
    count = len(events_by_hour[hour])
    bar = '#' * min(count // 10, 50)
    print(f" {hour}: {count:>6} events {bar}")

# Find hours with unusual activity spikes
avg = total / max(len(events_by_hour), 1)
print(f"\n=== ANOMALOUS HOURS (>{avg*3:.0f} events) ===")
for hour in sorted(events_by_hour.keys()):
    if len(events_by_hour[hour]) > avg * 3:
        print(f" {hour}: {len(events_by_hour[hour])} events (SPIKE)")
PYEOF
```
## Key Concepts
| Concept | Description |
|---------|-------------|
| Super-timeline | Unified chronological view combining all artifact timestamps from multiple sources |
| MACB timestamps | Modified, Accessed, Changed (metadata), Born (created) - four key file timestamp types |
| Plaso storage file | SQLite-based intermediate format storing parsed events before export |
| L2T CSV | Log2timeline CSV format with standardized columns for timeline events |
| Parser | Plaso module extracting timestamps from a specific artifact type (e.g., winevtx, prefetch) |
| Psort | Plaso sorting and filtering tool for post-processing storage files |
| Timesketch | Google open-source collaborative timeline analysis platform |
| Pivot points | Known timestamps (e.g., malware execution) used to focus investigation scope |
## Tools & Systems
| Tool | Purpose |
|------|---------|
| log2timeline (Plaso) | Primary timeline generation engine parsing 100+ artifact types |
| psort | Plaso output filtering, sorting, and export utility |
| Timesketch | Web-based collaborative forensic timeline analysis platform |
| Timeline Explorer | Eric Zimmerman's Windows GUI for CSV timeline analysis |
| KAPE | Automated triage collection feeding into Plaso processing |
| mactime (TSK) | Simpler timeline generation from Sleuth Kit bodyfiles |
| Excel/Sheets | Manual timeline review for small filtered datasets |
| Elastic/Kibana | Alternative visualization platform for JSONL timeline data |
## Common Scenarios
**Scenario 1: Ransomware Attack Reconstruction**
Process the full disk image with Plaso, filter to the week before encryption was discovered, identify the initial access vector from browser history and event logs, trace privilege escalation through registry and Prefetch, map lateral movement from network logon events, pinpoint encryption start from MFT timestamps showing mass file modifications.
**Scenario 2: Data Theft Investigation**
Create super-timeline from suspect's workstation, filter for USB device connection events, file access timestamps, and cloud storage browser activity, build a narrative showing data staging, compression, and exfiltration, present timeline to legal team with tagged evidence points.
**Scenario 3: Multi-System Breach Analysis**
Process disk images from all affected systems into a single Plaso storage file, import into Timesketch for collaborative analysis, search for lateral movement patterns across system timelines, identify the patient-zero system and initial compromise vector, map the full attack chain across the environment.
**Scenario 4: Insider Threat After-Hours Activity**
Filter timeline to non-business hours only, identify file access patterns outside normal working times, correlate with authentication events (badge access, VPN logon), search for data access to sensitive directories during these periods, build evidence package for HR/legal.
## Output Format
```
Timeline Reconstruction Summary:
Evidence Sources:
Disk Image: evidence.dd (500 GB, NTFS)
Plaso Storage: evidence.plaso (2.3 GB)
Processing Statistics:
Total events extracted: 4,567,890
Parsers used: 45 (winevtx, prefetch, mft, usnjrnl, lnk, chrome, firefox, winreg, ...)
Processing time: 3h 45m
Incident Window (2024-01-15 to 2024-01-20):
Events in window: 234,567
Event Sources:
MFT: 89,234
Event Logs: 45,678
USN Journal: 56,789
Registry: 23,456
Prefetch: 1,234
Browser: 5,678
LNK Files: 2,345
Other: 10,153
Key Timeline Events:
2024-01-15 14:32 - Phishing email opened (browser)
2024-01-15 14:33 - Malicious document downloaded
2024-01-15 14:35 - PowerShell executed (Prefetch + Event Log)
2024-01-15 14:36 - C2 connection established (Registry + Event Log)
2024-01-16 02:30 - Mimikatz execution (Prefetch)
2024-01-16 02:45 - Lateral movement to DC (Event Log)
2024-01-17 03:00 - Data exfiltration (MFT + USN Journal)
2024-01-18 03:00 - Log clearing (Event Log)
Exported Files:
Full Timeline: /timeline/full_timeline.csv (4.5M rows)
Incident Window: /timeline/incident_window.csv (234K rows)
Timesketch Import: /timeline/timeline.jsonl
```