# project-skill-audit

Analyze a project and recommend the highest-value skills to create or update. Use when: auditing project skills, getting skill recommendations, or reviewing existing skill coverage.
## Best use case

project-skill-audit is best used when you need a repeatable AI agent workflow instead of a one-off prompt. Teams using it should expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill

- You want a reusable workflow that can be run more than once with consistent structure.

## When not to use this skill

- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation

### Claude Code / Cursor / Codex

Manual installation:

- Download `SKILL.md` from GitHub.
- Place it at `.claude/skills/project-skill-audit/SKILL.md` inside your project.
- Restart your AI agent; it will auto-discover the skill.
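Concretely, the manual steps can be sketched as a short shell snippet. This is a minimal sketch, not the official installer: the real `SKILL.md` comes from the GitHub repository linked at the top of the page, so a stand-in file is created here just so the commands run end to end.

```shell
# Stand-in for the downloaded SKILL.md (in practice, download it from the
# GitHub repository linked above instead of generating it).
printf '# Project Skill Audit\n' > SKILL.md

# Place it where the agent auto-discovers project-local skills,
# then restart the agent so it picks the skill up.
mkdir -p .claude/skills/project-skill-audit
mv SKILL.md .claude/skills/project-skill-audit/SKILL.md
```

After a restart, the agent should list the skill among its discovered project skills.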
## How project-skill-audit Compares
| Feature / Agent | project-skill-audit | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions

### What does this skill do?

It analyzes a project and recommends the highest-value skills to create or update. Use it when auditing project skills, getting skill recommendations, or reviewing existing skill coverage.

### Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# Project Skill Audit

## Overview

Audit the project's real recurring workflows before recommending skills. Prefer evidence from memory, rollout summaries, existing skill folders, and current repo conventions over generic brainstorming. Recommend updates before new skills when an existing project skill is already close to the needed behavior.

## Workflow

1. Map the current project surface. Identify the repo root and read the most relevant project guidance first, such as `AGENTS.md`, `README.md`, roadmap/ledger files, and local docs that define workflows or validation expectations.
2. Build the memory/session path first. Resolve the memory base as `$CODEX_HOME` when set, otherwise default to `~/.codex`. Use these locations:
   - memory index: `$CODEX_HOME/memories/MEMORY.md` or `~/.codex/memories/MEMORY.md`
   - rollout summaries: `$CODEX_HOME/memories/rollout_summaries/`
   - raw sessions: `$CODEX_HOME/sessions/` or `~/.codex/sessions/`
3. Read project past sessions in this order. If the runtime prompt already includes a memory summary, start there. Then search `MEMORY.md` for:
   - repo name
   - repo basename
   - current `cwd`
   - important module or file names

   Open only the 1-3 most relevant rollout summaries first. Fall back to raw session JSONL only when the summaries are missing the exact evidence you need.
4. Scan existing project-local skills before suggesting anything new. Check these locations relative to the current repo root:
   - `.agents/skills`
   - `.codex/skills`
   - `skills`

   Read both `SKILL.md` and `agents/openai.yaml` when present.
5. Compare project-local skills against recurring work. Look for repeated patterns in past sessions:
   - repeated validation sequences
   - repeated failure shields
   - recurring ownership boundaries
   - repeated root-cause categories
   - workflows that repeatedly require the same repo-specific context

   If the pattern appears repeatedly and is not already well captured, it is a candidate skill.
6. Separate `new skill` from `update existing skill`. Recommend an update when an existing skill is already the right bucket but has stale triggers, missing guardrails, outdated paths, weak validation instructions, or incomplete scope. Recommend a new skill only when the workflow is distinct enough that stretching an existing skill would make it vague or confusing.
7. Check for overlap with global skills only after reviewing project-local skills. Use `$CODEX_HOME/skills` and `$CODEX_HOME/skills/public` to avoid proposing project-local skills for workflows already solved well by a generic shared skill. Do not reject a project-local skill just because a global skill exists; project-specific guardrails can still justify a local specialization.

## Session Analysis

### 1. Search memory index first

- Search `MEMORY.md` with `rg` using the repo name, basename, and `cwd`.
- Prefer entries that already cite rollout summaries with the same repo path.
- Capture:
  - repeated workflows
  - validation commands
  - failure shields
  - ownership boundaries
  - milestone or roadmap coupling

### 2. Open targeted rollout summaries

- Open the most relevant summary files under `memories/rollout_summaries/`.
- Prefer summaries whose filenames, `cwd`, or `keywords` match the current project.
- Extract:
  - what the user asked for repeatedly
  - what steps kept recurring
  - what broke repeatedly
  - what commands proved correctness
  - what project-specific context had to be rediscovered

### 3. Use raw sessions only as a fallback

- Only search `sessions/` JSONL files if rollout summaries are missing a concrete detail.
- Search by:
  - exact `cwd`
  - repo basename
  - thread ID from a rollout summary
  - specific file paths or commands
- Use raw sessions to recover exact prompts, command sequences, diffs, or failure text, not to replace the summary pass.

### 4. Turn session evidence into skill candidates

- A candidate `new skill` should correspond to a repeated workflow, not just a repeated topic.
- A candidate `skill update` should correspond to a workflow already covered by a local skill whose triggers, guardrails, or validation instructions no longer match the recorded sessions.
- Prefer concrete evidence such as:
  - "this validation sequence appeared in 4 sessions"
  - "this ownership confusion repeated across extractor and runtime fixes"
  - "the same local script and telemetry probes had to be rediscovered repeatedly"

## Recommendation Rules

- Recommend a new skill when:
  - the same repo-specific workflow or failure mode appears multiple times across sessions
  - success depends on project-specific paths, scripts, ownership rules, or validation steps
  - the workflow benefits from strong defaults or failure shields
- Recommend an update when:
  - an existing project-local skill already covers most of the need
  - `SKILL.md` and `agents/openai.yaml` drift from each other
  - paths, scripts, validation commands, or milestone references are stale
  - the skill body is too generic to reflect how the project is actually worked on
- Do not recommend a skill when:
  - the pattern is a one-off bug rather than a reusable workflow
  - a generic global skill already fits with no meaningful project-specific additions
  - the workflow has not recurred enough to justify the maintenance cost

## What To Scan

- Past sessions and memory:
  - memory summary already in context, if any
  - `$CODEX_HOME/memories/MEMORY.md` or `~/.codex/memories/MEMORY.md`
  - the 1-3 most relevant rollout summaries for the current repo
  - raw `$CODEX_HOME/sessions` or `~/.codex/sessions` JSONL files only if summaries are insufficient
- Project-local skill surface:
  - `./.agents/skills/*/SKILL.md`
  - `./.agents/skills/*/agents/openai.yaml`
  - `./.codex/skills/*/SKILL.md`
  - `./skills/*/SKILL.md`
- Project conventions:
  - `AGENTS.md`
  - `README.md`
  - roadmap, ledger, architecture, or validation docs
  - current worktree or recently touched areas if needed for context

## Output Expectations

Return a compact audit with:

1. `Existing skills`: list the project-local skills found and the main workflow each one covers.
2. `Suggested updates`: for each update candidate, include:
   - skill name
   - why it is incomplete or stale
   - the highest-value change to make
3. `Suggested new skills`: for each new skill, include:
   - recommended skill name
   - why it should exist
   - what would trigger it
   - the core workflow it should encode
4. `Priority order`: rank the top recommendations by expected value.

## Naming Guidance

- Prefer short hyphen-case names.
- Use project prefixes for project-local skills when that improves clarity.
- Prefer verb-led or action-oriented names over vague nouns.

## Failure Shields

- Do not invent recurring patterns without session or repo evidence.
- Do not recommend duplicate skills when an update to an existing skill would suffice.
- Do not rely on a single memory note if the current repo clearly evolved since then.
- Do not bulk-load all rollout summaries; stay targeted.
- Do not skip rollout summaries and jump straight to raw sessions unless the summaries are insufficient.
- Do not recommend skills from themes alone; recommendations should come from repeated procedures, repeated validation flows, or repeated failure modes.
- Do not confuse a project's current implementation tasks with its reusable skill needs.

## Follow-up

If the user asks to actually create or update one of the recommended skills, proceed to implement the chosen skill rather than continuing the audit.
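The memory-lookup and skill-scan steps in the workflow above can be sketched as a small shell snippet. This is a hedged sketch under a few assumptions: a POSIX shell, `grep` standing in where the skill suggests `rg` (ripgrep), and the skill's own path conventions (`$CODEX_HOME` falling back to `~/.codex`).

```shell
# Resolve the memory base the way the skill describes: $CODEX_HOME when set,
# otherwise ~/.codex. The repo name is derived from the current directory.
MEM_BASE="${CODEX_HOME:-$HOME/.codex}"
REPO_NAME="$(basename "$(pwd)")"

# 1. Search the memory index for entries mentioning this repo.
grep -n "$REPO_NAME" "$MEM_BASE/memories/MEMORY.md" 2>/dev/null || true

# 2. List rollout summaries, newest first; open only the few most relevant.
ls -t "$MEM_BASE/memories/rollout_summaries/" 2>/dev/null | head -n 3

# 3. Scan project-local skill folders before proposing anything new.
for dir in .agents/skills .codex/skills skills; do
  if [ -d "$dir" ]; then
    find "$dir" -maxdepth 2 -name SKILL.md
  fi
done
```

Each command degrades quietly when a path is missing, matching the skill's guidance to stay targeted rather than bulk-loading everything.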
## Related Skills

### swiftui-performance-audit

Audit and optimize SwiftUI runtime performance. Use when: diagnosing slow rendering, janky scrolling, excessive view updates, or high CPU/memory usage in SwiftUI apps.

### seo-audit

When the user wants to audit, review, or diagnose SEO issues on their site. Also use when the user mentions "SEO audit," "technical SEO," "why am I not ranking," "SEO issues," "on-page SEO," "meta tags review," or "SEO health check." For building pages at scale to target keywords, see programmatic-seo. For adding structured data, see schema-markup.

### security-audit

Scan code for security vulnerabilities, misconfigurations, and exposed secrets. Use when a user asks to audit security, find vulnerabilities, check for OWASP issues, scan for secrets, review dependencies for CVEs, detect SQL injection, find XSS vulnerabilities, or harden an application. Covers OWASP Top 10, dependency auditing, secrets detection, and generates fix recommendations with severity ratings.

### audit-logging

Implement tamper-evident audit logs for compliance (SOC 2, HIPAA, PCI DSS). Use when building compliance audit trails, tracking who did what and when, or implementing immutable event logs that satisfy regulatory retention requirements.

### accessibility-auditor

Audit web pages and components for WCAG 2.2 accessibility compliance. Use when a user asks to check accessibility, find a11y issues, audit for WCAG compliance, fix screen reader problems, check color contrast, ensure keyboard navigation works, or prepare for accessibility regulations like the European Accessibility Act or ADA.