improving-frontend-coverage
Runs frontend unit tests with coverage, analyzes coverage reports, and implements meaningful tests to increase coverage by ~0.2%. Use when you want to systematically improve frontend test coverage with high-value test cases.
Best use case
improving-frontend-coverage is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using improving-frontend-coverage should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/improving-frontend-coverage/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How improving-frontend-coverage Compares
| Feature / Agent | improving-frontend-coverage | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Runs frontend unit tests with coverage, analyzes coverage reports, and implements meaningful tests to increase coverage by ~0.2%. Use when you want to systematically improve frontend test coverage with high-value test cases.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
AI Agents for Marketing
Discover AI agents for marketing workflows, from SEO and content production to campaign research, outreach, and analytics.
AI Agents for Startups
Explore AI agent skills for startup validation, product research, growth experiments, documentation, and fast execution with small teams.
SKILL.md Source
# Improving frontend coverage

Increase frontend unit test coverage by ~0.2% through meaningful tests that add real value.

**Be fully autonomous** — Do NOT stop or pause to ask for confirmation. Keep iterating (analyze → implement → verify) until the 0.2% coverage target is reached. If you encounter ambiguities about what to test, make a reasonable choice and proceed.

## Workflow

**Step 1: Run tests with coverage**

```bash
COVERAGE_JSON=1 make frontend-tests  # ~5 min
```

Reports generated in `frontend/coverage/`:

- `coverage-summary.json` - Per-file percentages (lines, branches, functions)
- `coverage-final.json` - Line-level data with uncovered line numbers (hit count 0 in `s`, `f`, `b` maps)

**Step 2: Analyze and prioritize**

Read `coverage-summary.json` to find files with:

1. Large size + below-average coverage (high impact)
2. Core components in `lib/src/components/`
3. Utility functions in `utils/src/`

Skip: >97% coverage, auto-generated, `.d.ts`, test files.

**Step 3: Implement tests (in subagent)**

Launch a subagent to implement tests for each prioritized file. Provide the subagent with:

- The target file path and its uncovered lines from `coverage-final.json`
- Instructions to read the source, existing tests, and write new tests
- The test selection guidelines below

The subagent should:

1. Read source and existing tests to understand gaps
2. Write tests for: conditional rendering, event handlers, error states, edge cases, accessibility
3. Follow RTL best practices: query by role/label, test behavior not implementation
4. Run the new tests to verify they pass: `cd frontend && yarn test path/to/Component.test.tsx`

**Step 4: Verify and iterate**

```bash
cd frontend && yarn test path/to/Component.test.tsx  # Run new tests
COVERAGE_JSON=1 make frontend-tests                  # Measure progress
```

**Repeat steps 2-4 until coverage improves by ≥0.2%**, then run `make check`.

**Step 5: Simplify, review, and address feedback**

Once all tests pass and the coverage target is met:

1. Run the `simplifying-local-changes` subagent to clean up and simplify the code changes. Wait for completion.
2. Run the `reviewing-local-changes` subagent to review the changes. Wait for completion and read the review output.
3. Address the review feedback: for each recommendation, implement it if valid and it improves code quality; skip with brief reasoning if not applicable or it would over-engineer.
4. Run `/checking-changes` to verify everything still passes after changes.

## Test selection

**DO test:** Conditional rendering, user interactions, prop variations, error handling, accessibility, edge cases (null, empty, max values).

**DON'T test:** Pass-through props, styling, library internals, implementation details, already well-covered code.

**Coverage exclusions:** Use `/* istanbul ignore next */` sparingly for code that genuinely doesn't need testing. Always include a reason (e.g., `/* istanbul ignore next -- defensive */`):

- Browser-specific branches that can't run in jsdom (`/* istanbul ignore next -- browser-only */`)
- Defensive fallbacks that should never execute (`/* istanbul ignore next -- defensive */`)
- Framework-required boilerplate (`/* istanbul ignore next -- exhaustive */`)

## Notes

- Quality > coverage numbers - skip tests that don't catch real bugs
- Test files: co-located as `<Component>.test.tsx`
- Use `/checking-changes` after implementing tests
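Step 2's prioritization can be sketched in code. The snippet below is a minimal illustration, not part of the skill itself: the file paths and coverage numbers are made up, and the data shape is a simplified assumption about Istanbul's `coverage-summary.json` entries (real files contain additional fields such as statements and branches). It ranks candidate files by absolute uncovered line count and applies the skill's skip rules (>97% coverage, `.d.ts`, test files).

```typescript
// Simplified, assumed shape of one coverage-summary.json entry.
interface FileSummary {
  lines: { total: number; covered: number; pct: number };
}

// Hypothetical sample standing in for frontend/coverage/coverage-summary.json.
const summary: Record<string, FileSummary> = {
  "lib/src/components/Button.tsx": { lines: { total: 120, covered: 66, pct: 55 } },
  "utils/src/format.ts": { lines: { total: 40, covered: 39, pct: 97.5 } },
  "lib/src/components/Table.tsx": { lines: { total: 300, covered: 150, pct: 50 } },
};

// Rank files by potential impact (uncovered line count, descending),
// skipping files the skill excludes in Step 2.
function prioritize(data: Record<string, FileSummary>): string[] {
  return Object.entries(data)
    .filter(
      ([path, s]) =>
        s.lines.pct <= 97 && !path.endsWith(".d.ts") && !path.includes(".test.")
    )
    .sort(
      ([, a], [, b]) =>
        (b.lines.total - b.lines.covered) - (a.lines.total - a.lines.covered)
    )
    .map(([path]) => path);
}

console.log(prioritize(summary));
// Table.tsx ranks first (150 uncovered lines), then Button.tsx (54);
// format.ts is skipped because it is already above 97% coverage.
```

With the sample data, `Table.tsx` comes out first despite `Button.tsx` having a higher percentage gap, because absolute uncovered lines (not percentage) drive the overall coverage number the skill targets.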
Related Skills
improving-python-coverage
Runs Python unit tests with coverage, analyzes coverage reports, and implements meaningful tests to increase coverage by ~0.2%. Use when you want to systematically improve Python test coverage with high-value test cases.
fixing-streamlit-ci
Analyze and fix failed GitHub Actions CI jobs for the current branch/PR. Use when CI checks fail, PR checks show failures, or you need to diagnose lint/type/test errors and verify fixes locally.
fixing-flaky-e2e-tests
Diagnose and fix flaky Playwright e2e tests. Use when tests fail intermittently, show timeout errors, have snapshot mismatches, or exhibit browser-specific failures.
finalizing-pr
Finalizes branch changes for merging by simplifying code, running checks, reviewing changes, and creating a PR if needed. Use when ready to merge changes into the target branch.
discovering-make-commands
Lists available make commands for Streamlit development. Use for build, test, lint, or format tasks.
debugging-streamlit
Debug Streamlit frontend and backend changes using make debug with hot-reload. Use when testing code changes, investigating bugs, checking UI behavior, or needing screenshots of the running app.
creating-pull-requests
Creates a draft pull request on GitHub with proper labels, branch naming, and description formatting. Use when changes are ready to be submitted as a PR to the streamlit/streamlit repository.
checking-changes
Validates all code changes before committing by running format, lint, type, and unit test checks. Use after making backend (Python) or frontend (TypeScript) changes, before committing or finishing a work session.
assessing-external-test-risk
Assesses whether branch or PR changes are high-risk for externally hosted or embedded Streamlit usage and recommends whether external e2e coverage with `@pytest.mark.external_test` is needed. Use during code review, PR triage, or test planning when changes touch routing, auth, websocket/session behavior, embedding, assets, cross-origin behavior, SiS/Snowflake runtime, storage, or security headers.
addressing-pr-review-comments
Address all valid review comments on a PR for the current branch in the streamlit/streamlit repo. Covers both inline review comments and general PR (issue) comments. Use when a PR has reviewer feedback to address, including code changes, style fixes, and documentation updates.
writing-spec
Writes product and tech specs for new Streamlit features. Use when designing new API commands, widgets, or significant changes that need team review before implementation.
updating-internal-docs
Review internal documentation (*.md files) against the current codebase state and propose updates for outdated or incorrect information.