Creating snip Filters

Enables an AI agent to expertly create declarative YAML filters for `snip`, a CLI proxy designed to reduce LLM token consumption by intelligently filtering shell output.

100 stars
Complexity: easy

About this skill

This skill instructs an AI agent on how to write precise declarative YAML filters for `snip`, a command-line proxy. `snip` minimizes the tokens an LLM consumes by pre-filtering verbose shell command output, making it more concise and relevant for AI processing. The agent learns the complete structure of a `snip` filter file: match rules based on command, subcommand, and flags; injection rules for command arguments; and a transformation pipeline for output manipulation.

Users employ this skill to automate the creation of custom filters for various CLI tools. For example, an AI agent could generate a filter for `git log` that shows only commit messages and authors, or for `kubectl get pods` that displays only pod names and statuses. This streamlines the input for subsequent LLM tasks, reducing the processing time and computational cost of large, unfiltered outputs.

The primary benefit is enabling AI agents to interact with shell commands more efficiently and cost-effectively. By having the AI agent create these filters, developers and engineers can quickly tailor their CLI environments to output only the information crucial for automated analysis, code generation, or operational tasks, enhancing the overall productivity of AI-assisted workflows.

Best use case

The primary use case is for developers and AI agents who frequently interact with command-line tools and need to process their outputs using large language models. This skill allows the AI to generate specific filters to reduce verbosity, extract key information, and optimize the data fed to other AI components, benefiting anyone looking to improve the efficiency and reduce the cost of AI-driven CLI automation.

Expected output

A correctly structured `snip` YAML filter file that processes shell command output according to the specified match and transformation rules, ready for use by the `snip` CLI proxy.

Practical example

Example input

Create a `snip` filter for the `git log` command that only keeps lines containing commit hashes and commit messages, and injects the `--oneline` flag if not present. Name the filter `git-log-oneline`.

Example output

```yaml
name: "git-log-oneline"
version: 1
description: "Filters git log output to show only commit hashes and messages, injecting --oneline."

match:
  command: "git"
  subcommand: "log"
  exclude_flags: ["--graph", "--pretty"]

inject:
  args: ["--oneline"]
  skip_if_present: ["--oneline"]

pipeline:
  - action: "keep_lines"
    pattern: "\\S"
  - action: "keep_lines"
    pattern: "^[a-f0-9]{7,} "   # --oneline lines start with an abbreviated hash followed by the message

on_error: "passthrough"
```

When to use this skill

  • When an AI agent needs to extract specific, concise information from verbose shell command outputs.
  • To reduce LLM token consumption and processing costs by pre-filtering raw CLI output.
  • When automating tasks that involve parsing shell command outputs with AI.
  • To customize and standardize the output format of CLI tools for consistent AI input.

When not to use this skill

  • When the full, raw output of a shell command is strictly required for comprehensive analysis.
  • If `snip` is not installed or the overhead of an additional proxy is not desired.
  • For interactive command-line tools that require continuous user input or real-time feedback.
  • When dealing with very small or already concise command outputs where filtering provides minimal benefit.

How Creating snip Filters Compares

| Feature | Creating snip Filters | Standard Approach |
|---------|-----------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Easy | N/A |

Frequently Asked Questions

What does this skill do?

Enables an AI agent to expertly create declarative YAML filters for `snip`, a CLI proxy designed to reduce LLM token consumption by intelligently filtering shell output.

How difficult is it to install?

The installation complexity is rated as easy.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Creating snip Filters

You are an expert at writing declarative YAML filters for **snip**, a CLI proxy that reduces LLM token consumption by filtering shell output.

## Filter File Location

- **Built-in filters**: `filters/*.yaml` (embedded in the binary at build time)
- **User filters**: `~/.config/snip/filters/*.yaml` (override built-in filters by name)

## Filter Structure

Every filter is a YAML file with this structure:

```yaml
name: "tool-subcommand"          # Required. Unique identifier, used for registry lookup.
version: 1                       # Schema version (always 1 for now).
description: "What this filter does"  # Human-readable purpose.

match:                           # Required. When to apply this filter.
  command: "tool"                # Required. The CLI tool name (e.g., "git", "go", "npm").
  subcommand: "sub"             # Optional. First non-flag argument (e.g., "test", "log").
  exclude_flags: ["-v", "--json"]  # Optional. Skip filter if user passes any of these.
  require_flags: ["--all"]      # Optional. Only apply if user passes ALL of these.

inject:                          # Optional. Modify command args before execution.
  args: ["--json"]              # Arguments to append to the command.
  defaults:                     # Flag defaults, only added if flag not already present.
    "-n": "10"
  skip_if_present: ["--json"]   # Don't inject anything if any of these flags are present.

pipeline:                        # Required. Ordered list of transformation actions.
  - action: "keep_lines"
    pattern: "\\S"
  - action: "head"
    n: 20

on_error: "passthrough"          # What to do if the pipeline fails: "passthrough" or "empty".
```

## Match Rules

- `command` is matched exactly against the first token of the shell command.
- `subcommand` is matched against the first non-flag argument.
- Flag matching uses **prefix matching**: `"-v"` matches both `-v` and `-verbose`.
- Registry lookup is O(1) by key `"command"` or `"command:subcommand"`.
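As an illustrative sketch of how these rules combine (the tool and flags here are stand-ins, not a shipped filter), a match block for a hypothetical `docker ps` filter:

```yaml
match:
  command: "docker"            # fires only when the first shell token is exactly "docker"
  subcommand: "ps"             # first non-flag argument must be "ps"
  exclude_flags: ["--format"]  # prefix match: also skips on "--format=json"
```

With this block, the registry key would be `"docker:ps"`, and the filter steps aside whenever the user has already requested their own output format.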

## Inject Behavior

- Injected `args` are inserted before any `--` separator, otherwise appended.
- `defaults` only apply if their flag key is not already present in the user's args.
- If any flag in `skip_if_present` is found, the entire inject block is skipped.
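A sketch of how these rules compose, using illustrative flags: given this inject block for a `git log` filter,

```yaml
inject:
  args: ["--stat"]          # appended, but kept before any "--" separator
  defaults:
    "-n": "5"               # only added when the user did not pass -n themselves
  skip_if_present: ["--patch"]
```

running `git log -- docs/` should become `git log --stat -n 5 -- docs/` (injected arguments land before the `--` separator), while `git log --patch` would be left untouched because of `skip_if_present`.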

## The 16 Pipeline Actions

### Line Filtering

| Action | Params | Description |
|--------|--------|-------------|
| `keep_lines` | `pattern` (regex) | Keep only lines matching the pattern |
| `remove_lines` | `pattern` (regex) | Remove lines matching the pattern |
| `head` | `n` (int, default 10), `overflow_msg` (string, default "+{remaining} more lines") | Keep first N lines |
| `tail` | `n` (int, default 10) | Keep last N lines |
| `dedup` | `normalize` ([]string of regexes to strip before comparing), `top` (int, 0=all) | Deduplicate lines, output "text (xN)" for repeats |
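A sketch combining the line-filtering actions to trim repetitive log output (the patterns are illustrative, not from a shipped filter):

```yaml
pipeline:
  - action: "keep_lines"
    pattern: "\\S"                        # drop blank lines
  - action: "dedup"
    normalize: ["\\d{2}:\\d{2}:\\d{2}"]   # strip timestamps before comparing lines
    top: 5                                # report only the 5 most repeated entries
  - action: "head"
    n: 20
    overflow_msg: "+{remaining} more lines"
```

Repeated lines collapse to `text (xN)`, so a thousand identical retry messages cost a single output line.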

### Line Transformation

| Action | Params | Description |
|--------|--------|-------------|
| `truncate_lines` | `max` (int, default 80), `ellipsis` (string, default "...") | Truncate long lines |
| `strip_ansi` | (none) | Remove ANSI escape codes |
| `compact_path` | (none) | Remove directory prefixes from file paths |
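These three make a natural cleanup prefix that many filters could share; a sketch:

```yaml
pipeline:
  - action: "strip_ansi"       # remove color codes before any regex matching
  - action: "compact_path"     # e.g. "src/pkg/util/io.go" becomes "io.go"
  - action: "truncate_lines"
    max: 120
    ellipsis: "..."
```

Running `strip_ansi` first matters: later regex patterns would otherwise have to account for escape sequences embedded in the text.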

### Extraction & Grouping

| Action | Params | Description |
|--------|--------|-------------|
| `regex_extract` | `pattern` (regex with capture groups), `format` (string using $0, $1, $2...) | Extract data via regex capture groups |
| `group_by` | `pattern` (regex with capture group), `format` (template, default "{{.Key}}: {{.Count}}"), `top` (int) | Group lines by capture group, count occurrences |
| `aggregate` | `patterns` (map of name->regex), `format` (Go template) | Count matches for named patterns across all input |
| `state_machine` | `states` (map of state definitions with `keep`, `until`, `next`) | Stateful line filtering with transitions |
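For illustration, a sketch that pairs `regex_extract` with `group_by` to summarize compiler-style `file:line: message` output (the patterns are hypothetical):

```yaml
pipeline:
  - action: "regex_extract"
    pattern: "^([^:]+):(\\d+): (.*)$"   # capture file, line number, message
    format: "$1: $3"                    # keep file and message, drop the line number
  - action: "group_by"
    pattern: "^([^:]+):"                # group the extracted lines by file name
    format: "{{.Key}}: {{.Count}} issues"
    top: 10
```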

### JSON Processing

| Action | Params | Description |
|--------|--------|-------------|
| `json_extract` | `fields` ([]string), `format` (template, optional) | Extract fields from JSON input |
| `json_schema` | `max_depth` (int, default 3) | Output JSON type schema |
| `ndjson_stream` | `group_by` (string field name), `format` (template with .Key, .Count, .Events) | Process newline-delimited JSON |
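A sketch of `json_extract` (field names and the template syntax for accessing them are assumptions based on the parameter descriptions above):

```yaml
pipeline:
  - action: "json_extract"
    fields: ["name", "status"]
    format: "{{.name}}: {{.status}}"
```

This pattern pairs naturally with `inject: args: ["--json"]`: request structured output, then keep only the two fields the LLM needs.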

### Formatting

| Action | Params | Description |
|--------|--------|-------------|
| `format_template` | `template` (Go text/template, required) | Format output using Go template |

### Template Data for `format_template`

The template receives:
- `{{.lines}}` - all current lines joined with newlines
- `{{.count}}` - number of lines
- `{{.groups}}` - map from `group_by` action (if used earlier in pipeline)
- `{{.stats}}` - map from `aggregate` action (if used earlier in pipeline)

### Metadata Flow Between Actions

- `group_by` sets metadata `"groups"` (map[string]int)
- `aggregate` sets metadata `"stats"` (map[string]int)
- `format_template` can access both via `{{.groups}}` and `{{.stats}}`
- All other actions pass metadata through unchanged
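Putting the metadata flow together, a sketch of a pipeline whose final template reads the `aggregate` stats (the patterns are illustrative):

```yaml
pipeline:
  - action: "group_by"
    pattern: "^(\\w+):"            # sets the "groups" metadata map
  - action: "aggregate"
    patterns:                      # sets the "stats" metadata map
      errors: "level=error"
      warnings: "level=warn"
  - action: "format_template"
    template: "{{.count}} lines, {{.stats.errors}} errors, {{.stats.warnings}} warnings"
```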

## Design Principles

1. **Start with `keep_lines` pattern `"\\S"`** to strip blank lines early.
2. **Use `inject` to request machine-readable output** (e.g., `--json`, `--porcelain`) then filter that structured data.
3. **Respect user intent**: use `exclude_flags` to skip filtering when the user explicitly requests a different format.
4. **Always set `on_error: "passthrough"`** so raw output is returned if filtering fails.
5. **Chain actions from broad to specific**: filter noise first, then extract, then format.
6. **Keep output minimal but useful**: the goal is 60-90% token reduction while preserving actionable information.

## Examples

### Simple: remove noise lines

```yaml
name: "npm-install"
version: 1
description: "Condensed npm install output"
match:
  command: "npm"
  subcommand: "install"
pipeline:
  - action: "remove_lines"
    pattern: "^(npm warn|npm notice)"
  - action: "keep_lines"
    pattern: "\\S"
  - action: "aggregate"
    patterns:
      added: "^added "
      removed: "^removed "
      up_to_date: "up to date"
    format: "{{if gt .up_to_date 0}}up to date{{else}}{{.added}} added, {{.removed}} removed{{end}}"
on_error: "passthrough"
```

### Intermediate: inject flags + extract structured data

```yaml
name: "go-test"
version: 1
description: "Condensed go test output with pass/fail summary"
match:
  command: "go"
  subcommand: "test"
  exclude_flags: ["-json", "-v", "-bench", "-run"]
inject:
  args: ["-json"]
  skip_if_present: ["-json", "-v", "-bench"]
pipeline:
  - action: "keep_lines"
    pattern: "\\S"
  - action: "keep_lines"
    pattern: "\"Test\":\""
  - action: "keep_lines"
    pattern: "\"Action\":\"(pass|fail)\""
  - action: "aggregate"
    patterns:
      passed: '"Action":"pass"'
      failed: '"Action":"fail"'
    format: "{{if and (eq .passed 0) (eq .failed 0)}}No tests found{{else}}{{.passed}} passed, {{.failed}} failed{{end}}"
on_error: "passthrough"
```

### Advanced: state machine for multi-section output

```yaml
name: "cargo-test"
version: 1
description: "Condensed cargo test output"
match:
  command: "cargo"
  subcommand: "test"
pipeline:
  - action: "remove_lines"
    pattern: "^\\s*(Compiling|Downloading|Downloaded|Updating|Running|Executable)"
  - action: "keep_lines"
    pattern: "\\S"
  - action: "state_machine"
    states:
      start:
        keep: "^(test |running |test result)"
        until: "^failures"
        next: "failures"
      failures:
        keep: "."
        until: "^$"
        next: "done"
  - action: "aggregate"
    patterns:
      pass: "\\.\\.\\. ok$"
      fail: "\\.\\.\\. FAILED$"
      ignored: "\\.\\.\\. ignored$"
  - action: "format_template"
    template: "{{.lines}}"
on_error: "passthrough"
```

## Workflow to Create a New Filter

1. **Identify the command** and its typical verbose output.
2. **Run the command** and capture raw output to understand the structure.
3. **Decide what to keep**: what information does the LLM actually need?
4. **Check if the tool has a machine-readable flag** (--json, --porcelain, etc.) that would make filtering easier -- use `inject` if so.
5. **Write the pipeline**: strip blanks, filter/extract, aggregate, format.
6. **Test the filter** by placing it in `~/.config/snip/filters/` and running the command through snip.
7. **To contribute**: add the YAML to `filters/` in the repo and submit a PR.
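A minimal starter skeleton to copy into `~/.config/snip/filters/` (every name and pattern here is a placeholder to replace with your tool's specifics):

```yaml
name: "mytool-run"                 # placeholder: tool-subcommand
version: 1
description: "Condensed mytool run output"
match:
  command: "mytool"
  subcommand: "run"
pipeline:
  - action: "keep_lines"
    pattern: "\\S"                 # strip blank lines first (design principle 1)
  - action: "tail"
    n: 30
on_error: "passthrough"            # raw output survives if the pipeline fails
```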
