# gws-modelarmor-sanitize-prompt

Google Model Armor: sanitize a user prompt through a Model Armor template.

## Best use case

gws-modelarmor-sanitize-prompt is best used when you need a repeatable AI agent workflow rather than a one-off prompt. Teams using it can expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill

- You want a reusable workflow that can be run more than once with consistent structure.

## When not to use this skill

- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation

### Claude Code / Cursor / Codex

Manual installation:

- Download SKILL.md from GitHub.
- Place it at `.claude/skills/gws-modelarmor-sanitize-prompt/SKILL.md` inside your project.
- Restart your AI agent; it will auto-discover the skill.
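The manual steps above can be sketched as shell commands. The download URL is not listed on this page, so fetching SKILL.md itself is left to you; the source path in the comment is an example, not a real location:

```shell
# Create the skill directory the agent scans for SKILL.md files.
mkdir -p .claude/skills/gws-modelarmor-sanitize-prompt

# Move the SKILL.md you downloaded from GitHub into place
# (example path; substitute wherever your browser saved it):
# mv ~/Downloads/SKILL.md .claude/skills/gws-modelarmor-sanitize-prompt/SKILL.md
```

After restarting your agent, the skill should appear in its available-skills list.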
## How gws-modelarmor-sanitize-prompt Compares
| Feature / Agent | gws-modelarmor-sanitize-prompt | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions

**What does this skill do?**

Google Model Armor: sanitize a user prompt through a Model Armor template.

**Where can I find the source code?**

You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source

# modelarmor +sanitize-prompt

> **PREREQUISITE:** Read `../gws-shared/SKILL.md` for auth, global flags, and security rules. If missing, run `gws generate-skills` to create it.

Sanitize a user prompt through a Model Armor template.

## Usage

```bash
gws modelarmor +sanitize-prompt --template <NAME>
```

## Flags

| Flag | Required | Default | Description |
|------|----------|---------|-------------|
| `--template` | ✓ | — | Full template resource name (`projects/PROJECT/locations/LOCATION/templates/TEMPLATE`) |
| `--text` | — | — | Text content to sanitize |
| `--json` | — | — | Full JSON request body (overrides `--text`) |

## Examples

```bash
gws modelarmor +sanitize-prompt --template projects/P/locations/L/templates/T --text 'user input'
echo 'prompt' | gws modelarmor +sanitize-prompt --template ...
```

## Tips

- If neither `--text` nor `--json` is given, reads from stdin.
- For outbound safety, use `+sanitize-response` instead.

## See Also

- [gws-shared](../gws-shared/SKILL.md) — Global flags and auth
- [gws-modelarmor](../gws-modelarmor/SKILL.md) — All "filter user-generated content for safety" commands
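Because `--template` requires the full resource name rather than a bare template ID, a quick sketch of assembling it from its parts may help. The project, location, and template IDs below are placeholders, and the final invocation is commented out since it needs the `gws` CLI and valid auth:

```shell
# Placeholder IDs; substitute your own project, region, and template.
PROJECT_ID="my-project"
LOCATION="us-central1"
TEMPLATE_ID="basic-filters"

# Build the full resource name expected by --template.
TEMPLATE_NAME="projects/${PROJECT_ID}/locations/${LOCATION}/templates/${TEMPLATE_ID}"
echo "${TEMPLATE_NAME}"
# → projects/my-project/locations/us-central1/templates/basic-filters

# Example invocation (requires the gws CLI and auth per ../gws-shared/SKILL.md):
# gws modelarmor +sanitize-prompt --template "${TEMPLATE_NAME}" --text 'user input'
```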
## Related Skills
optimizing-prompts
This skill optimizes prompts for large language models (LLMs) to reduce token usage, lower costs, and improve performance. It analyzes the prompt, identifies areas for simplification and redundancy removal, and rewrites the prompt to be more conci... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
cursor-custom-prompts
Create effective custom prompts for Cursor AI using project rules, prompt engineering patterns, and reusable templates. Triggers on "cursor prompts", "prompt engineering cursor", "better cursor prompts", "cursor instructions", "cursor prompt templates".
promptify
Transform user requests into detailed, precise prompts for AI models. Use when users say "promptify", "promptify this", or explicitly request prompt engineering or improvement of their request for better AI responses.
prompt-improver
Optimize prompts for better AI responses. Use when user asks to improve a prompt, refine a prompt, make a prompt better, optimize prompting, review their prompt, or says "/improve-prompt". Transforms vague requests into clear, specific, actionable prompts.
gws-modelarmor
Google Model Armor: Filter user-generated content for safety.
gws-modelarmor-sanitize-response
Google Model Armor: Sanitize a model response through a Model Armor template.
gws-modelarmor-create-template
Google Model Armor: Create a new Model Armor template.
tldr-prompt
Create tldr summaries for GitHub Copilot files (prompts, agents, instructions, collections), MCP servers, or documentation from URLs and queries.
prompt-builder
Guide users through creating high-quality GitHub Copilot prompts with proper structure, tools, and best practices.
promptfoo-evaluation
Configures and runs LLM evaluation using Promptfoo framework. Use when setting up prompt testing, creating evaluation configs (promptfooconfig.yaml), writing Python custom assertions, implementing llm-rubric for LLM-as-judge, or managing few-shot examples in prompts. Triggers on keywords like "promptfoo", "eval", "LLM evaluation", "prompt testing", or "model comparison".
prompt-injection-test
A test skill with prompt injection patterns