skill-safe-install-l0-strict
Strict secure-install workflow for ClawHub/OpenClaw skills. Use when asked to install a skill safely, inspect skill permissions, review third-party skill risk, or run a pre-install security audit. Enforce full review + sandbox + explicit consent gates, with no author-based trust bypass.
About this skill
This "Skill Safe Install (L0 Strict)" agent skill provides a critical security layer for the ClawHub/OpenClaw ecosystem. Its primary function is to guide an AI agent through a rigorous, auditable process for installing new skills, especially those from third-party sources. The skill is designed to prevent accidental or risky installations by mandating a comprehensive security review, requiring sandbox verification, and securing explicit user confirmation for any sensitive actions. A core principle is to eliminate all forms of implicit trust: no author-based bypasses or automatic permissions are granted, and every step is fully vetted.

The skill outlines a detailed workflow from confirming the target skill through mandatory security inspection. It includes checks for existing installations and trust state, and emphasizes a thorough review of maintainer information, required secrets, network access, command execution risks, and persistence behaviors. By adhering to non-negotiable rules like never skipping steps and never auto-trusting, it significantly reduces the attack surface and the potential for malicious skill deployment within an AI agent's environment.
Best use case
The primary use case is for AI agents and their users who need to install new capabilities but prioritize security and risk mitigation above all else. It is invaluable for organizations, developers, or individual users integrating third-party AI skills, where verifying the integrity and safety of new tools is paramount to maintaining a secure and controlled operational environment.
Expected outcome
A safely and thoroughly vetted skill installation, confirmed by the user, with a clear understanding of the skill's permissions and potential risks, significantly reducing the chance of unwanted side effects.
Practical example
Example input
Agent, please perform a safe install of the 'code-formatter' skill from ClawHub.
Example output
Initiating L0 Strict Secure Install for 'code-formatter'. Preliminary security review complete: [summary of findings]. Requires sandbox verification and explicit consent. Proceed? (Y/N)
When to use this skill
- When installing any third-party or unknown ClawHub/OpenClaw skill.
- When a strict security audit or risk assessment is required before skill deployment.
- When inspecting skill permissions and potential system access.
- When needing explicit user consent for sensitive installation actions.
When not to use this skill
- When installing pre-vetted, first-party, or internal skills with established trust.
- When installation speed is the absolute highest priority and security checks are deliberately deferred (not recommended).
- When the user explicitly understands and accepts the risks of a direct install without audit.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download `SKILL.md` from GitHub
- Place it at `.claude/skills/skill-safe-install-l0-strict/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
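The manual steps above can be sketched as a shell snippet. `SKILL_MD_URL` is a hypothetical placeholder, since the repository's raw URL is not shown on this page; the snippet writes a stand-in file so the directory layout is visible without network access:

```shell
# Manual placement sketch for the skill file (placeholder URL, see note above).
SKILL_DIR=".claude/skills/skill-safe-install-l0-strict"
mkdir -p "$SKILL_DIR"

# With a real URL you would fetch SKILL.md directly:
#   curl -fsSL "$SKILL_MD_URL" -o "$SKILL_DIR/SKILL.md"

# Stand-in content so the expected layout is visible offline:
printf '# Skill Safe Install (L0 Strict)\n' > "$SKILL_DIR/SKILL.md"
ls "$SKILL_DIR"
```

After restarting the agent, it should discover the skill from this path automatically.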
How skill-safe-install-l0-strict Compares
| Feature / Agent | skill-safe-install-l0-strict | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Easy | N/A |
Frequently Asked Questions
What does this skill do?
Strict secure-install workflow for ClawHub/OpenClaw skills. Use when asked to install a skill safely, inspect skill permissions, review third-party skill risk, or run a pre-install security audit. Enforce full review + sandbox + explicit consent gates, with no author-based trust bypass.
How difficult is it to install?
The installation complexity is rated as easy. You can find the installation instructions above.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
Cursor vs Codex for AI Workflows
Compare Cursor and Codex for AI coding workflows, repository assistance, debugging, refactoring, and reusable developer skills.
SKILL.md Source
# Skill Safe Install (L0 Strict)

Enforce a conservative, auditable install workflow.

## Purpose

Use this skill to reduce accidental or risky third-party skill installs:

- Force risk review before installation.
- Require sandbox verification before formal install.
- Require explicit user confirmation before sensitive actions.
- Avoid hidden trust escalation (no author-based bypass, no implicit allowBundled writes).

## Non-negotiable rules

1. Never skip steps.
2. Never auto-trust by author, popularity, or "official-looking" name.
3. Never modify persistent config (`openclaw.json`) without explicit user consent in the current conversation.
4. If risk cannot be evaluated, treat as high risk and pause.

## Workflow (Step 0 → Step 6)

### Step 0 — Confirm target

- Resolve exact skill slug and (if available) version.
- If input is ambiguous, ask for confirmation before install.

Suggested checks:

- `clawhub search <query>`
- Verify exact slug/version from results.

### Step 1 — Duplicate/state check

- Check whether the skill is already installed.
- Check current trust state (whether already in `skills.allowBundled`).

Suggested checks:

- `clawhub list`
- Read `~/.openclaw/openclaw.json` (or platform-equivalent config path)

### Step 2 — Mandatory security review (no whitelist bypass)

Run inspect and summarize at least:

1. Maintainer/source and recent update signal
2. Required secrets/credentials (API keys, OAuth, tokens)
3. Network/system access scope
4. Command execution or file-system mutation risk
5. Persistence behavior (config edits, auto-run, always-on behavior)

Suggested check:

- `clawhub inspect <skill>`

#### Risk rating rubric

- **LOW**: Text/process guidance only, no credentials, no system mutation.
- **MEDIUM**: Requires limited credentials or external API access with clear scope.
- **HIGH**: Broad command execution, config mutation, or multi-system OAuth.
- **CRITICAL**: Destructive capability, privilege escalation, stealth persistence, or unclear behavior.
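One way to make the rubric mechanical is a small classifier. The `Findings` fields below are illustrative assumptions about what an inspect summary might record, not actual `clawhub inspect` output; this is a sketch of the rubric's precedence order (CRITICAL over HIGH over MEDIUM), not a definitive implementation:

```python
from dataclasses import dataclass

@dataclass
class Findings:
    """Hypothetical summary of a security review (fields are assumptions)."""
    needs_credentials: bool = False      # API keys, OAuth, tokens required
    broad_command_exec: bool = False     # arbitrary shell / broad system access
    mutates_config: bool = False         # edits persistent configuration
    destructive_or_stealth: bool = False # destructive capability or hidden persistence
    behavior_unclear: bool = False       # rule 4: unevaluable behavior

def rate(f: Findings) -> str:
    """Map review findings to the rubric's risk rating, highest match wins."""
    if f.destructive_or_stealth or f.behavior_unclear:
        return "CRITICAL"
    if f.broad_command_exec or f.mutates_config:
        return "HIGH"
    if f.needs_credentials:
        return "MEDIUM"
    return "LOW"
```

For example, a skill that only provides text guidance rates LOW, while one requesting scoped API credentials rates MEDIUM.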
#### Gate policy

- LOW / MEDIUM: Continue to sandbox.
- HIGH: Continue only after explicit confirmation.
- CRITICAL: Do not install by default; require explicit override and warn strongly.

### Step 3 — Sandbox install (isolated workdir)

Install in a temporary isolated directory first.

- Use isolated workdir (do not install to primary skill directory yet).
- Confirm install result and basic behavior.
- If sandbox fails, stop.

Example pattern:

- `clawhub --workdir <temp_dir> --dir skills install <skill>`

### Step 4 — User confirmation checkpoint

Before formal install, present:

- Chosen skill slug/version
- Risk rating + top risks
- Sandbox result
- Exact next action

Proceed only after explicit "yes/install/继续" ("continue").

### Step 5 — Formal install

Run formal install only after Step 4 consent.

Example:

- `clawhub install <skill>`

If install fails, stop and report error + rollback advice.

### Step 6 — Optional trust persistence (`allowBundled`)

Default is **do not write** the trust list. Only perform this step when the user explicitly asks to persist trust.

Required safeguards:

1. Backup config with timestamp.
2. Show exactly what key will change (`skills.allowBundled`).
3. Append skill slug only if absent (idempotent).
4. Confirm backup path and rollback command.

Do not use hidden or implicit trust writes.

## Output format (required)

- `[Step 0/6] Target: ...`
- `[Step 1/6] State: ...`
- `[Step 2/6] Review: risk=LOW|MEDIUM|HIGH|CRITICAL; findings=...`
- `[Step 3/6] Sandbox: pass|fail`
- `[Step 4/6] Consent: pending|approved|denied`
- `[Step 5/6] Install: pass|fail`
- `[Step 6/6] Trust write: skipped|pending|written`

## Refusal conditions

Stop and ask for confirmation/override when any condition is met:

- Skill identity is ambiguous.
- Inspect output is unavailable or incomplete.
- Risk is HIGH/CRITICAL and user has not explicitly approved.
- Requested config mutation lacks explicit consent.
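The Step 6 safeguards (timestamped backup, idempotent append to `skills.allowBundled`) can be sketched as a small helper. The config path and key names follow the skill text above; the `persist_trust` function itself is illustrative and not part of any clawhub API:

```python
import json
import shutil
import time
from pathlib import Path

def persist_trust(config_path: Path, slug: str) -> Path:
    """Sketch of Step 6: back up openclaw.json, then append slug if absent."""
    # Safeguard 1: timestamped backup before any mutation.
    backup = config_path.with_name(f"openclaw.json.{int(time.time())}.bak")
    shutil.copy2(config_path, backup)

    # Safeguard 2/3: mutate only skills.allowBundled, idempotently.
    cfg = json.loads(config_path.read_text())
    allowed = cfg.setdefault("skills", {}).setdefault("allowBundled", [])
    if slug not in allowed:
        allowed.append(slug)
    config_path.write_text(json.dumps(cfg, indent=2))

    # Safeguard 4: return the backup path so rollback can be reported.
    return backup
```

Running the helper twice for the same slug leaves a single entry in the list, which is what "idempotent append" requires.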
Related Skills
AI Safety Audit
Comprehensive AI safety and alignment audit framework for businesses deploying AI agents. Built around the UK AI Security Institute Alignment Project standards (2026), EU AI Act requirements, and NIST AI RMF.
HIPAA Compliance for AI Agents
Generate HIPAA compliance checklists, risk assessments, and audit frameworks for healthcare organizations deploying AI agents.
Data Governance Framework
Assess, score, and remediate your organization's data governance posture across 6 domains.
Cybersecurity Risk Assessment
You are a cybersecurity risk assessment specialist. When the user needs a security audit, threat assessment, or compliance review, follow this framework.
afrexai-cybersecurity-engine
Complete cybersecurity assessment, threat modeling, and hardening system. Use when conducting security audits, threat modeling, penetration testing, incident response, or building security programs from scratch. Works with any stack — zero external dependencies.
Compliance & Audit Readiness Engine
Your AI compliance officer. Guides startups and scale-ups through SOC 2, ISO 27001, GDPR, HIPAA, and PCI DSS — from zero to audit-ready. No consultants needed.
Compliance Audit Generator
Run internal compliance audits against major frameworks without hiring a consultant.
clickhouse-github-forensics
Query GitHub event data via ClickHouse for supply chain investigations, actor profiling, and anomaly detection. Use when investigating GitHub-based attacks, tracking repository activity, analyzing actor behavior patterns, detecting tag/release tampering, or reconstructing incident timelines from public GitHub data. Triggers on GitHub supply chain attacks, repo compromise investigations, actor attribution, tag poisoning, or "query github events".
security-guardian
Automated security auditing for OpenClaw projects. Scans for hardcoded secrets (API keys, tokens) and container vulnerabilities (CVEs) using Trivy. Provides structured reports to help maintain a clean and secure codebase.
mema-vault
Secure credential manager using AES-256 (Fernet) encryption. Stores, retrieves, and rotates secrets using a mandatory Master Key. Use for managing API keys, database credentials, and other sensitive tokens.
guardian-wall
Mitigate prompt injection attacks, especially indirect ones from external web content or files. Use this skill when processing untrusted text from the internet, user-uploaded files, or any external source to sanitize content and detect malicious instructions (e.g., "ignore previous instructions", "system override").
SX-security-audit
全方位安全审计技能。检查文件权限、环境变量、依赖漏洞、配置文件、网络端口、Git 安全、Shell 安全、macOS 安全、密钥检测等。支持 CLI 参数、JSON 输出、配置文件。当用户要求"安全检查"、"漏洞扫描"、"权限检查"、"安全审计"时使用此技能。