improving-frontend-coverage

Runs frontend unit tests with coverage, analyzes coverage reports, and implements meaningful tests to increase coverage by ~0.2%. Use when you want to systematically improve frontend test coverage with high-value test cases.

44,152 stars

Best use case

improving-frontend-coverage is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using improving-frontend-coverage should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/improving-frontend-coverage/SKILL.md --create-dirs "https://raw.githubusercontent.com/streamlit/streamlit/main/.claude/skills/improving-frontend-coverage/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/improving-frontend-coverage/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How improving-frontend-coverage Compares

| Feature / Agent | improving-frontend-coverage | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Runs frontend unit tests with coverage, analyzes coverage reports, and implements meaningful tests to increase coverage by ~0.2%. Use when you want to systematically improve frontend test coverage with high-value test cases.

Where can I find the source code?

The source code lives in the streamlit/streamlit repository on GitHub, at .claude/skills/improving-frontend-coverage/SKILL.md.

SKILL.md Source

# Improving frontend coverage

Increase frontend unit test coverage by ~0.2% through meaningful tests that add real value.

**Be fully autonomous** — Do NOT stop or pause to ask for confirmation. Keep iterating (analyze → implement → verify) until the 0.2% coverage target is reached. If you encounter ambiguities about what to test, make a reasonable choice and proceed.

## Workflow

**Step 1: Run tests with coverage**

```bash
COVERAGE_JSON=1 make frontend-tests  # ~5 min
```

Reports generated in `frontend/coverage/`:
- `coverage-summary.json` - Per-file percentages (lines, branches, functions)
- `coverage-final.json` - Line-level data with uncovered line numbers (hit count 0 in `s`, `f`, `b` maps)

**Step 2: Analyze and prioritize**

Read `coverage-summary.json` to find files with:
1. Large size + below-average coverage (high impact)
2. Core components in `lib/src/components/`
3. Utility functions in `utils/src/`

Skip: >97% coverage, auto-generated, `.d.ts`, test files.

**Step 3: Implement tests (in subagent)**

Launch a subagent to implement tests for each prioritized file. Provide the subagent with:
- The target file path and its uncovered lines from `coverage-final.json`
- Instructions to read the source, existing tests, and write new tests
- The test selection guidelines below

The subagent should:
1. Read source and existing tests to understand gaps
2. Write tests for: conditional rendering, event handlers, error states, edge cases, accessibility
3. Follow RTL best practices: query by role/label, test behavior not implementation
4. Run the new tests to verify they pass: `cd frontend && yarn test path/to/Component.test.tsx`

**Step 4: Verify and iterate**

```bash
cd frontend && yarn test path/to/Component.test.tsx  # Run new tests
COVERAGE_JSON=1 make frontend-tests                   # Measure progress
```

**Repeat steps 2-4 until coverage improves by ≥0.2%**, then run `make check`.

**Step 5: Simplify, review, and address feedback**

Once all tests pass and coverage target is met:

1. Run the `simplifying-local-changes` subagent to clean up and simplify the code changes. Wait for completion.
2. Run the `reviewing-local-changes` subagent to review the changes. Wait for completion and read the review output.
3. Address the review feedback: for each recommendation, implement it if valid and improves code quality; skip with brief reasoning if not applicable or would over-engineer.
4. Run `/checking-changes` to verify everything still passes after changes.

## Test selection

**DO test:** Conditional rendering, user interactions, prop variations, error handling, accessibility, edge cases (null, empty, max values).

**DON'T test:** Pass-through props, styling, library internals, implementation details, already well-covered code.

**Coverage exclusions:** Use `/* istanbul ignore next */` sparingly for code that genuinely doesn't need testing. Always include a reason (e.g., `/* istanbul ignore next -- defensive */`):
- Browser-specific branches that can't run in jsdom (`/* istanbul ignore next -- browser-only */`)
- Defensive fallbacks that should never execute (`/* istanbul ignore next -- defensive */`)
- Framework-required boilerplate (`/* istanbul ignore next -- exhaustive */`)

## Notes

- Quality > coverage numbers - skip tests that don't catch real bugs
- Test files: co-located as `<Component>.test.tsx`
- Use `/checking-changes` after implementing tests

Related Skills

improving-python-coverage

from streamlit/streamlit

Runs Python unit tests with coverage, analyzes coverage reports, and implements meaningful tests to increase coverage by ~0.2%. Use when you want to systematically improve Python test coverage with high-value test cases.

fixing-streamlit-ci

from streamlit/streamlit

Analyze and fix failed GitHub Actions CI jobs for the current branch/PR. Use when CI checks fail, PR checks show failures, or you need to diagnose lint/type/test errors and verify fixes locally.

fixing-flaky-e2e-tests

from streamlit/streamlit

Diagnose and fix flaky Playwright e2e tests. Use when tests fail intermittently, show timeout errors, have snapshot mismatches, or exhibit browser-specific failures.

finalizing-pr

from streamlit/streamlit

Finalizes branch changes for merging by simplifying code, running checks, reviewing changes, and creating a PR if needed. Use when ready to merge changes into the target branch.

discovering-make-commands

from streamlit/streamlit

Lists available make commands for Streamlit development. Use for build, test, lint, or format tasks.

debugging-streamlit

from streamlit/streamlit

Debug Streamlit frontend and backend changes using make debug with hot-reload. Use when testing code changes, investigating bugs, checking UI behavior, or needing screenshots of the running app.

creating-pull-requests

from streamlit/streamlit

Creates a draft pull request on GitHub with proper labels, branch naming, and description formatting. Use when changes are ready to be submitted as a PR to the streamlit/streamlit repository.

checking-changes

from streamlit/streamlit

Validates all code changes before committing by running format, lint, type, and unit test checks. Use after making backend (Python) or frontend (TypeScript) changes, before committing or finishing a work session.

assessing-external-test-risk

from streamlit/streamlit

Assesses whether branch or PR changes are high-risk for externally hosted or embedded Streamlit usage and recommends whether external e2e coverage with `@pytest.mark.external_test` is needed. Use during code review, PR triage, or test planning when changes touch routing, auth, websocket/session behavior, embedding, assets, cross-origin behavior, SiS/Snowflake runtime, storage, or security headers.

addressing-pr-review-comments

from streamlit/streamlit

Address all valid review comments on a PR for the current branch in the streamlit/streamlit repo. Covers both inline review comments and general PR (issue) comments. Use when a PR has reviewer feedback to address, including code changes, style fixes, and documentation updates.

writing-spec

from streamlit/streamlit

Writes product and tech specs for new Streamlit features. Use when designing new API commands, widgets, or significant changes that need team review before implementation.

updating-internal-docs

from streamlit/streamlit

Review internal documentation (*.md files) against the current codebase state and propose updates for outdated or incorrect information.