automation-audit-ops
Evidence-first automation inventory and overlap audit workflow for ECC. Use when the user wants to know which jobs, hooks, connectors, MCP servers, or wrappers are live, broken, redundant, or missing before fixing anything.
Best use case
automation-audit-ops is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using automation-audit-ops should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/automation-audit-ops/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Evidence-first automation inventory and overlap audit workflow for ECC. Use when the user wants to know which jobs, hooks, connectors, MCP servers, or wrappers are live, broken, redundant, or missing before fixing anything.
Where can I find the source code?
You can find the source code in the skill's GitHub repository.
Related Guides
Top AI Agents for Productivity
See the top AI agent skills for productivity, workflow automation, operational systems, documentation, and everyday task execution.
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
ChatGPT vs Claude for Agent Skills
Compare ChatGPT and Claude for AI agent skills across coding, writing, research, and reusable workflow execution.
SKILL.md Source
# Automation Audit Ops

Use this when the user asks what automations are live, which jobs are broken, where overlap exists, or what tooling and connectors are actually doing useful work right now.

This is an audit-first operator skill. The job is to produce an evidence-backed inventory and a keep / merge / cut / fix-next recommendation set before rewriting anything.

## Skill Stack

Pull these ECC-native skills into the workflow when relevant:

- `workspace-surface-audit` for connector, MCP, hook, and app inventory
- `knowledge-ops` when the audit needs to reconcile live repo truth with durable context
- `github-ops` when the answer depends on CI, scheduled workflows, issues, or PR automation
- `ecc-tools-cost-audit` when the real problem is webhook fanout, queued jobs, or billing burn in the sibling app repo
- `research-ops` when local inventory must be compared against current platform support or public docs
- `verification-loop` for proving post-fix state instead of relying on assumed recovery

## When to Use

- user asks "what automations do I have", "what is live", "what is broken", or "what overlaps"
- the task spans cron jobs, GitHub Actions, local hooks, MCP servers, connectors, wrappers, or app integrations
- the user wants to know what was ported from another agent system and what still needs to be rebuilt inside ECC
- the workspace has accumulated multiple ways to do the same thing and the user wants one canonical lane

## Guardrails

- start read-only unless the user explicitly asked for fixes
- separate:
  - configured
  - authenticated
  - recently verified
  - stale or broken
  - missing entirely
- do not claim a tool is live just because a skill or config references it
- do not merge or delete overlapping surfaces until the evidence table exists

## Workflow

### 1. Inventory the real surface

Read the current live surface before theorizing:

- repo hooks and local hook scripts
- GitHub Actions and scheduled workflows
- MCP configs and enabled servers
- connector- or app-backed integrations
- wrapper scripts and repo-specific automation entrypoints

Group them by surface:

- local runtime
- repo CI / automation
- connected external systems
- messaging / notifications
- billing / customer operations
- research / monitoring

### 2. Classify each item by live state

For every surfaced automation, mark:

- configured
- authenticated
- recently verified
- stale or broken
- missing

Then classify the problem type:

- active breakage
- auth outage
- stale status
- overlap or redundancy
- missing capability

### 3. Trace the proof path

Back every important claim with a concrete source:

- file path
- workflow run
- hook log
- config entry
- recent command output
- exact failure signature

If the current state is ambiguous, say so directly instead of pretending the audit is complete.

### 4. End with keep / merge / cut / fix-next

For each overlapping or suspect surface, return one call:

- keep
- merge
- cut
- fix next

The value is in collapsing noisy automation into one canonical ECC lane, not in preserving every historical path.
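The read-only inventory pass in step 1 can be sketched as a small scanner. This is a minimal sketch only: the scanned paths (`.github/workflows`, `.claude/hooks`, `.mcp.json`) are common conventions assumed for illustration, not guaranteed locations in any given repo.

```python
from pathlib import Path

# Hypothetical surface map — these paths are common conventions,
# not guaranteed locations; adjust to the repo actually being audited.
SURFACES = {
    "repo CI / automation": ".github/workflows",
    "local runtime (hooks)": ".claude/hooks",
}

def inventory(repo_root: str) -> dict[str, list[str]]:
    """Read-only scan: list candidate automation files per surface."""
    root = Path(repo_root)
    found: dict[str, list[str]] = {}
    for surface, rel in SURFACES.items():
        d = root / rel
        found[surface] = sorted(p.name for p in d.glob("*")) if d.is_dir() else []
    # MCP config is a single file, not a directory
    mcp = root / ".mcp.json"
    found["connected external systems (MCP)"] = [mcp.name] if mcp.is_file() else []
    return found
```

Each entry this returns is only "configured" evidence; live state still has to be proven in step 2.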
## Output Format

```text
CURRENT SURFACE
- automation
- source
- live state
- proof

FINDINGS
- active breakage
- overlap
- stale status
- missing capability

RECOMMENDATION
- keep
- merge
- cut
- fix next

NEXT ECC MOVE
- exact skill / hook / workflow / app lane to strengthen
```

## Pitfalls

- do not answer from memory when the live inventory can be read
- do not treat "present in config" as "working"
- do not fix lower-value redundancy before naming the broken high-signal path
- do not widen the task into a repo rewrite if the user asked for inventory first

## Verification

- important claims cite a live proof path
- each surfaced automation is labeled with a clear live-state category
- the final recommendation distinguishes keep / merge / cut / fix-next
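The live-state labels and the keep / merge / cut / fix-next call can be sketched as a small evidence-table entry plus decision helper. The decision rules below are illustrative assumptions for one plausible triage order, not the skill's mandated logic.

```python
from dataclasses import dataclass

# Live-state categories from the Guardrails section
LIVE_STATES = ("configured", "authenticated", "recently verified",
               "stale or broken", "missing")

@dataclass
class AutomationItem:
    name: str
    live_state: str          # one of LIVE_STATES
    proof: str               # file path, workflow run, hook log, etc.
    overlaps_with: str = ""  # canonical lane this duplicates, if any

def recommend(item: AutomationItem) -> str:
    """Illustrative keep / merge / cut / fix-next decision rules."""
    if item.live_state not in LIVE_STATES:
        raise ValueError(f"unknown live state: {item.live_state}")
    if item.live_state in ("stale or broken", "missing"):
        return "fix next"    # broken or absent high-signal paths come first
    if item.overlaps_with and item.live_state in ("configured", "authenticated"):
        # duplicates without a proof path get cut; proven ones get merged
        return "merge" if item.proof else "cut"
    return "keep"
```

Usage: `recommend(AutomationItem("stale-cron", "stale or broken", "crontab -l output"))` returns `"fix next"`, while a verified, non-overlapping workflow returns `"keep"`.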
Related Skills
workspace-surface-audit
Audit the active repo, MCP servers, plugins, connectors, env surfaces, and harness setup, then recommend the highest-value ECC-native skills, hooks, agents, and operator workflows. Use when the user wants help setting up Claude Code or understanding what capabilities are actually available in their environment.
click-path-audit
Trace every user-facing button/touchpoint through its full state change sequence to find bugs where functions individually work but cancel each other out, produce wrong final state, or leave the UI in an inconsistent state. Use when: systematic debugging found no bugs but users report broken buttons, or after any major refactor touching shared state stores.
ecc-tools-cost-audit
Evidence-first ECC Tools burn and billing audit workflow. Use when investigating runaway PR creation, quota bypass, premium-model leakage, duplicate jobs, or GitHub App cost spikes in the ECC Tools repo.
ui-demo
Record polished UI demo videos using Playwright. Use when the user asks to create a demo, walkthrough, screen recording, or tutorial video of a web application. Produces WebM videos with visible cursor, natural pacing, and professional feel.
token-budget-advisor
Offers the user an informed choice about how much response depth to consume before answering. Use this skill when the user explicitly wants to control response length, depth, or token budget. TRIGGER when: "token budget", "token count", "token usage", "token limit", "response length", "answer depth", "short version", "brief answer", "detailed answer", "exhaustive answer", "respuesta corta vs larga", "cuántos tokens", "ahorrar tokens", "responde al 50%", "dame la versión corta", "quiero controlar cuánto usas", or clear variants where the user is explicitly asking to control answer size or depth. DO NOT TRIGGER when: user has already specified a level in the current session (maintain it), the request is clearly a one-word answer, or "token" refers to auth/session/payment tokens rather than response size.
skill-comply
Visualize whether skills, rules, and agent definitions are actually followed — auto-generates scenarios at 3 prompt strictness levels, runs agents, classifies behavioral sequences, and reports compliance rates with full tool call timelines
santa-method
Multi-agent adversarial verification with convergence loop. Two independent review agents must both pass before output ships.
safety-guard
Use this skill to prevent destructive operations when working on production systems or running agents autonomously.
repo-scan
Cross-stack source code asset audit — classifies every file, detects embedded third-party libraries, and delivers actionable four-level verdicts per module with interactive HTML reports.
project-flow-ops
Operate execution flow across GitHub and Linear by triaging issues and pull requests, linking active work, and keeping GitHub public-facing while Linear remains the internal execution layer. Use when the user wants backlog control, PR triage, or GitHub-to-Linear coordination.
product-lens
Use this skill to validate the "why" before building, run product diagnostics, and pressure-test product direction before the request becomes an implementation contract.
openclaw-persona-forge
为 OpenClaw AI Agent 锻造完整的龙虾灵魂方案。根据用户偏好或随机抽卡, 输出身份定位、灵魂描述(SOUL.md)、角色化底线规则、名字和头像生图提示词。 如当前环境提供已审核的生图 skill,可自动生成统一风格头像图片。 当用户需要创建、设计或定制 OpenClaw 龙虾灵魂时使用。 不适用于:微调已有 SOUL.md、非 OpenClaw 平台的角色设计、纯工具型无性格 Agent。 触发词:龙虾灵魂、虾魂、OpenClaw 灵魂、养虾灵魂、龙虾角色、龙虾定位、 龙虾剧本杀角色、龙虾游戏角色、龙虾 NPC、龙虾性格、龙虾背景故事、 lobster soul、lobster character、抽卡、随机龙虾、龙虾 SOUL、gacha。