accessibility-review
Run a WCAG 2.1 AA accessibility audit on a design or page. Trigger with "audit accessibility", "check a11y", "is this accessible?", or when reviewing a design for color contrast, keyboard navigation, touch target size, or screen reader behavior before handoff.
Best use case
accessibility-review is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using accessibility-review can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at .claude/skills/accessibility-review/SKILL.md inside your project
- Restart your AI agent — it will auto-discover the skill
How accessibility-review Compares
| Feature / Agent | accessibility-review | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It runs a WCAG 2.1 AA accessibility audit on a design or page, checking color contrast, keyboard navigation, touch target size, and screen reader behavior before handoff.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
Cursor vs Codex for AI Workflows
Compare Cursor and Codex for AI coding workflows, repository assistance, debugging, refactoring, and reusable developer skills.
SKILL.md Source
# /accessibility-review

> If you see unfamiliar placeholders or need to check which tools are connected, see [CONNECTORS.md](../../CONNECTORS.md).

Audit a design or page for WCAG 2.1 AA accessibility compliance.

## Usage

```
/accessibility-review $ARGUMENTS
```

Audit for accessibility: @$1

## WCAG 2.1 AA Quick Reference

### Perceivable

- **1.1.1** Non-text content has alt text
- **1.3.1** Info and structure conveyed semantically
- **1.4.3** Contrast ratio >= 4.5:1 (normal text), >= 3:1 (large text)
- **1.4.11** Non-text contrast >= 3:1 (UI components, graphics)

### Operable

- **2.1.1** All functionality available via keyboard
- **2.4.3** Logical focus order
- **2.4.7** Visible focus indicator
- **2.5.5** Touch target >= 44x44 CSS pixels (Level AAA in WCAG 2.1; included here as a best practice)

### Understandable

- **3.2.1** Predictable on focus (no unexpected changes)
- **3.3.1** Error identification (describe the error)
- **3.3.2** Labels or instructions for inputs

### Robust

- **4.1.2** Name, role, value for all UI components

## Common Issues

1. Insufficient color contrast
2. Missing form labels
3. No keyboard access to interactive elements
4. Missing alt text on meaningful images
5. Focus traps in modals
6. Missing ARIA landmarks
7. Auto-playing media without controls
8. Time limits without extension options

## Testing Approach

1. Automated scan (catches ~30% of issues)
2. Keyboard-only navigation
3. Screen reader testing (VoiceOver, NVDA)
4. Color contrast verification
5. Zoom to 200% — does layout break?
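As an illustration of the contrast thresholds above (not part of the skill source), the 4.5:1 and 3:1 ratios can be computed directly from the WCAG 2.1 relative luminance formula. Function names here are our own; only the formula itself comes from the spec:

```python
# Minimal sketch of a WCAG 2.1 contrast-ratio check.
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.1 definition)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """L = 0.2126 R + 0.7152 G + 0.0722 B over linearized channels."""
    r, g, b = (_linearize(ch) for ch in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """Lighter luminance over darker, each offset by 0.05."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg: tuple, bg: tuple, large_text: bool = False) -> bool:
    """AA thresholds: 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

# Black on white is the maximum possible ratio.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A useful sanity check: #767676 on white just passes AA for normal text, while the slightly lighter #777777 fails, which is why automated verification beats eyeballing.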
## Output

```markdown
## Accessibility Audit: [Design/Page Name]

**Standard:** WCAG 2.1 AA | **Date:** [Date]

### Summary

**Issues found:** [X] | **Critical:** [X] | **Major:** [X] | **Minor:** [X]

### Findings

#### Perceivable

| # | Issue | WCAG Criterion | Severity | Recommendation |
|---|-------|----------------|----------|----------------|
| 1 | [Issue] | [1.4.3 Contrast] | 🔴 Critical | [Fix] |

#### Operable

| # | Issue | WCAG Criterion | Severity | Recommendation |
|---|-------|----------------|----------|----------------|
| 1 | [Issue] | [2.1.1 Keyboard] | 🟡 Major | [Fix] |

#### Understandable

| # | Issue | WCAG Criterion | Severity | Recommendation |
|---|-------|----------------|----------|----------------|
| 1 | [Issue] | [3.3.2 Labels] | 🟢 Minor | [Fix] |

#### Robust

| # | Issue | WCAG Criterion | Severity | Recommendation |
|---|-------|----------------|----------|----------------|
| 1 | [Issue] | [4.1.2 Name, Role, Value] | 🟡 Major | [Fix] |

### Color Contrast Check

| Element | Foreground | Background | Ratio | Required | Pass? |
|---------|------------|------------|-------|----------|-------|
| [Body text] | [color] | [color] | [X]:1 | 4.5:1 | ✅/❌ |

### Keyboard Navigation

| Element | Tab Order | Enter/Space | Escape | Arrow Keys |
|---------|-----------|-------------|--------|------------|
| [Element] | [Order] | [Behavior] | [Behavior] | [Behavior] |

### Screen Reader

| Element | Announced As | Issue |
|---------|--------------|-------|
| [Element] | [What SR says] | [Problem if any] |

### Priority Fixes

1. **[Critical fix]** — Affects [who] and blocks [what]
2. **[Major fix]** — Improves [what] for [who]
3. **[Minor fix]** — Nice to have
```

## If Connectors Available

If **~~design tool** is connected:

- Inspect color values, font sizes, and touch targets directly from Figma
- Check component ARIA roles and keyboard behavior in the design spec

If **~~project tracker** is connected:

- Create tickets for each accessibility finding with severity and WCAG criterion
- Link findings to existing accessibility remediation epics

## Tips

1. **Start with contrast and keyboard** — These catch the most common and impactful issues.
2. **Test with real assistive technology** — My audit is a great start, but manual testing with VoiceOver/NVDA catches things I can't.
3. **Prioritize by impact** — Fix issues that block users first, polish later.
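The "automated scan" step in the testing approach is usually handled by a full engine such as axe-core, but the idea behind one common check, flagging meaningful images with no alt attribute, can be sketched with nothing but the standard library. This is an illustrative toy, not a substitute for a real scanner:

```python
# Hypothetical mini-scanner for one common issue: <img> elements missing
# an alt attribute entirely. Note that alt="" is valid for decorative
# images, so only a truly absent attribute is flagged.
from html.parser import HTMLParser

class MissingAltScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images lacking any alt attribute

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.missing.append(attr_map.get("src", "<no src>"))

def find_images_missing_alt(html: str) -> list:
    scanner = MissingAltScanner()
    scanner.feed(html)
    return scanner.missing

page = ('<img src="logo.png" alt="Acme logo">'
        '<img src="chart.png">'
        '<img src="spacer.gif" alt="">')
print(find_images_missing_alt(page))  # ['chart.png']
```

Even this naive check shows why automation only catches a fraction of issues: it cannot tell whether an image is meaningful or whether existing alt text is actually descriptive, which is where the manual keyboard and screen reader passes come in.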
Related Skills
pipeline-review
Analyze pipeline health — prioritize deals, flag risks, get a weekly action plan. Use when running a weekly pipeline review, deciding which deals to focus on this week, spotting stale or stuck opportunities, auditing for hygiene issues like bad close dates, or identifying single-threaded deals.
metrics-review
Review and analyze product metrics with trend analysis and actionable insights. Use when running a weekly, monthly, or quarterly metrics review, investigating a sudden spike or drop, comparing performance against targets, or turning raw numbers into a scorecard with recommended actions.
vendor-review
Evaluate a vendor — cost analysis, risk assessment, and recommendation. Use when reviewing a new vendor proposal, deciding whether to renew or replace a contract, comparing two vendors side-by-side, or building a TCO breakdown and negotiation points before procurement sign-off.
brand-review
Review content against your brand voice, style guide, and messaging pillars, flagging deviations by severity with specific before/after fixes. Use when checking a draft before it ships, when auditing copy for voice consistency and terminology, or when screening for unsubstantiated claims, missing disclaimers, and other legal flags.
review-contract
Review a contract against your organization's negotiation playbook — flag deviations, generate redlines, provide business impact analysis. Use when reviewing vendor or customer agreements, when you need clause-by-clause analysis against standard positions, or when preparing a negotiation strategy with prioritized redlines and fallback positions.
performance-review
Structure a performance review with self-assessment, manager template, and calibration prep. Use when review season kicks off and you need a self-assessment template, writing a manager review for a direct report, prepping rating distributions and promotion cases for calibration, or turning vague feedback into specific behavioral examples.
code-review
Review code changes for security, performance, and correctness. Trigger with a PR URL or diff, "review this before I merge", "is this code safe?", or when checking a change for N+1 queries, injection risks, missing edge cases, or error handling gaps.
forecast
Generate a weighted sales forecast with best/likely/worst scenarios, commit vs. upside breakdown, and gap analysis. Use when preparing a quarterly forecast call, assessing gap-to-quota from a pipeline CSV, deciding which deals to commit vs. call upside, or checking pipeline coverage against your number.
draft-outreach
Research a prospect then draft personalized outreach. Uses web research by default, supercharged with enrichment and CRM. Trigger with "draft outreach to [person/company]", "write cold email to [prospect]", "reach out to [name]".
daily-briefing
Start your day with a prioritized sales briefing. Works standalone when you tell me your meetings and priorities, supercharged when you connect your calendar, CRM, and email. Trigger with "morning briefing", "daily brief", "what's on my plate today", "prep my day", or "start my day".
create-an-asset
Generate tailored sales assets (landing pages, decks, one-pagers, workflow demos) from your deal context. Describe your prospect, audience, and goal — get a polished, branded asset ready to share with customers.
competitive-intelligence
Research your competitors and build an interactive battlecard. Outputs an HTML artifact with clickable competitor cards and a comparison matrix. Trigger with "competitive intel", "research competitors", "how do we compare to [competitor]", "battlecard for [competitor]", or "what's new with [competitor]".