tests-maintenance
Maintains IdeaVim test suite quality. Reviews disabled tests, ensures Neovim annotations are documented, and improves test readability. Use for periodic test maintenance.
Best use case
tests-maintenance is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using tests-maintenance should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/tests-maintenance/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How tests-maintenance Compares
| Feature / Agent | tests-maintenance | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Maintains IdeaVim test suite quality. Reviews disabled tests, ensures Neovim annotations are documented, and improves test readability. Use for periodic test maintenance.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Tests Maintenance Skill

You are a test maintenance specialist for the IdeaVim project. Your job is to keep the test suite healthy by reviewing test quality, checking disabled tests, and ensuring proper documentation of test exclusions.

## Scope

**DO:**
- Review test quality and readability
- Check if disabled tests can be re-enabled
- Ensure Neovim test exclusions are well-documented
- Improve test content (replace meaningless strings)

**DON'T:**
- Fix bugs in source code
- Implement new features
- Make changes to production code

## How to Select Tests

Each run should focus on a small subset. Use one of these strategies:

```bash
# Get a random test file
find . -path "*/test/*" -name "*Test*.kt" -not -path "*/build/*" | shuf -n 1

# Or focus on specific areas:
# - src/test/java/org/jetbrains/plugins/ideavim/action/
# - src/test/java/org/jetbrains/plugins/ideavim/ex/
# - src/test/java/org/jetbrains/plugins/ideavim/extension/
# - tests/java-tests/src/test/kotlin/
```

## What to Check

### 1. Disabled Tests (@Disabled)

Find disabled tests and check if they can be re-enabled:

```bash
# Find all @Disabled tests
grep -rn "@Disabled" --include="*.kt" src/test tests/
```

For each disabled test:
1. **Try running it**: `./gradlew test --tests "ClassName.testMethod"`
2. **If it passes**: Investigate what changed, re-enable with explanation
3. **If it fails**: Ensure reason is documented in @Disabled annotation
4. **If obsolete**: Remove tests for features that no longer exist

### 2. Neovim Test Exclusions (@TestWithoutNeovim)

Tests excluded from Neovim verification must have clear documentation.

```bash
# Find TestWithoutNeovim usages
grep -rn "@TestWithoutNeovim" --include="*.kt" src/test tests/

# Find those without description (needs fixing)
grep -rn "@TestWithoutNeovim(SkipNeovimReason\.[A-Z_]*)" --include="*.kt" src/test
```

#### SkipNeovimReason Categories

| Reason | When to Use |
|--------|-------------|
| `PLUGIN` | IdeaVim extension-specific behavior (surround, commentary, etc.) |
| `INLAYS` | Test involves IntelliJ inlays (not present in Vim) |
| `OPTION` | IdeaVim-specific option behavior |
| `UNCLEAR` | Expected behavior is unclear - needs investigation |
| `NON_ASCII` | Non-ASCII character handling differs |
| `MAPPING` | Mapping-specific test |
| `SELECT_MODE` | Vim's select mode |
| `VISUAL_BLOCK_MODE` | Visual block mode edge cases |
| `DIFFERENT` | Intentionally different behavior from Vim |
| `NOT_VIM_TESTING` | Test doesn't verify Vim behavior (IDE integration, etc.) |
| `SHOW_CMD` | :showcmd related differences |
| `SCROLL` | Scrolling behavior (viewport differs) |
| `TEMPLATES` | IntelliJ live templates |
| `EDITOR_MODIFICATION` | Editor-specific modifications |
| `CMD` | Command-line mode differences |
| `ACTION_COMMAND` | `:action` command (IDE-specific) |
| `PLUG` | `<Plug>` mappings |
| `FOLDING` | Code folding (IDE feature) |
| `TABS` | Tab/window management differences |
| `PLUGIN_ERROR` | Plugin execution error handling |
| `VIM_SCRIPT` | VimScript implementation differences |
| `GUARDED_BLOCKS` | IDE guarded/read-only blocks |
| `CTRL_CODES` | Control code handling |
| `BUG_IN_NEOVIM` | Known Neovim bug (not IdeaVim issue) |
| `PSI` | IntelliJ PSI/code intelligence features |

**Requirements:**
- Add `description` parameter for non-obvious cases
- Check if the reason is still valid
- Consider if test could be split: part that works with Neovim, part that doesn't
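For illustration, a documented exclusion that follows these requirements might look like the sketch below. The class, test name, and description text are hypothetical, and the import paths should be verified against the project's actual test fixtures:

```kotlin
// Hypothetical sketch only: names are illustrative; check imports against the real test fixtures.
import org.jetbrains.plugins.ideavim.SkipNeovimReason
import org.jetbrains.plugins.ideavim.TestWithoutNeovim
import org.jetbrains.plugins.ideavim.VimTestCase
import org.junit.jupiter.api.Test

class FoldingMotionTest : VimTestCase() {
  @TestWithoutNeovim(
    SkipNeovimReason.FOLDING,
    description = "Code folding is an IDE feature; Neovim has no matching editor state to compare against",
  )
  @Test
  fun `test motion over a folded region`() {
    // Test body exercising IdeaVim behavior around a folded block (omitted in this sketch).
  }
}
```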
### 3. Test Quality & Readability

**Meaningful test content**: Avoid senseless text. Look for:

```bash
grep -rn "asdf\|qwerty\|xxxxx\|aaaaa\|dhjkw" --include="*.kt" src/test tests/
```

Replace with:
- Actual code snippets relevant to the test
- Lorem Ipsum template from CONTRIBUTING.md
- Realistic text demonstrating the feature

**Test naming**: Names should explain what's being tested.

### 4. @VimBehaviorDiffers Annotation

Tests marked with this annotation document intentional differences from Vim:

```kotlin
@VimBehaviorDiffers(
  originalVimAfter = "expected vim result",
  description = "why IdeaVim differs",
  shouldBeFixed = true/false
)
```

Check:
- Is the difference still valid?
- If `shouldBeFixed = true`, is there a YouTrack issue?
- Can behavior now be aligned with Vim?

## Making Changes

### When to Change

**DO fix:**
- Unclear or missing test descriptions
- Senseless test content
- Disabled tests that now pass
- Incorrect `@TestWithoutNeovim` reasons
- Missing `description` on annotations

**DON'T:**
- Fix source code bugs
- Implement missing features
- Major refactoring without clear benefit

### Commit Messages

```
tests: Re-enable DeleteMotionTest after fix in #1234

The test was disabled due to a caret positioning bug that was fixed in commit abc123. Verified the test passes consistently.
```

```
tests: Improve test content readability in ChangeActionTest

Replace meaningless "asdfgh" strings with realistic code snippets that better demonstrate the change operation behavior.
```

```
tests: Document @TestWithoutNeovim reasons in ScrollTest

Added description parameter to clarify why scroll tests are excluded from Neovim verification (viewport behavior differs).
```

## Commands Reference

```bash
# Run specific test
./gradlew test --tests "ClassName.testMethod"

# Run all tests in a class
./gradlew test --tests "ClassName"

# Run tests with Neovim verification
./gradlew test -Dideavim.nvim.test=true --tests "ClassName"

# Standard test suite (excludes property and long-running)
./gradlew test -x :tests:property-tests:test -x :tests:long-running-tests:test
```

## Output

When run via workflow, if changes are made, create a PR with:
- **Title**: "Tests maintenance: <brief description>"
- **Body**: What was checked, issues found, changes made

If no changes needed, report what was checked and that everything is fine.
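To make the "meaningful test content" check concrete, the following sketch contrasts filler text with the kind of realistic fixture the skill prefers. The fixture strings are hypothetical and the surrounding IdeaVim test API is omitted:

```kotlin
// Hypothetical sketch: fixture strings only; the surrounding IdeaVim test API is omitted.

// Meaningless fixture: a reviewer cannot tell what the motion under test is supposed to do.
val fillerFixture = "asdf qwerty asdf"

// Realistic fixture: the expected behavior of the motion is obvious at a glance.
val realisticFixture = """
    fun greet(name: String) {
        println("Hello, " + name)
    }
""".trimIndent()
```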
Related Skills
generating-unit-tests
This skill enables Claude to automatically generate comprehensive unit tests from source code. It is triggered when the user requests unit tests, test cases, or test suites for specific files or code snippets. The skill supports multiple testing frameworks including Jest, pytest, JUnit, and others, intelligently detecting the appropriate framework or using one specified by the user. Use this skill when the user asks to "generate tests", "create unit tests", or uses the shortcut "gut" followed by a file path.
managing-snapshot-tests
This skill enables Claude to manage and update snapshot tests using intelligent diff analysis and selective updates. It is triggered when the user asks to analyze snapshot failures, update snapshots, or manage snapshot tests in general. It helps distinguish intentional changes from regressions, selectively update snapshots, and validate snapshot integrity. Use this when the user mentions "snapshot tests", "update snapshots", "snapshot failures", or requests to run "/snapshot-manager" or "/sm". It supports Jest, Vitest, Playwright, and Storybook frameworks.
running-smoke-tests
This skill runs smoke tests to verify critical application functionality. It executes pre-defined test suites that check system health, authentication, core features, and external integrations. Use this skill after deployments, upgrades, or significant configuration changes to ensure the application is operational. Trigger this skill using the terms "smoke test" or "st".
running-load-tests
Create and execute load tests for performance validation using k6, JMeter, and Artillery. Use when validating application performance under load conditions or identifying bottlenecks. Trigger with phrases like "run load test", "create stress test", or "validate performance under load".
tracking-regression-tests
This skill enables Claude to track and run regression tests, ensuring new changes don't break existing functionality. It is triggered when the user asks to "track regression", "run regression tests", or uses the shortcut "reg". The skill helps in maintaining code stability by identifying critical tests, automating their execution, and analyzing the impact of changes. It also provides insights into test history and identifies flaky tests. The skill uses the `regression-test-tracker` plugin.
running-mutation-tests
This skill enables Claude to validate test suite quality by performing mutation testing. It is triggered when the user asks to run mutation tests, analyze test effectiveness, or improve test coverage. The skill introduces code mutations, runs tests against the mutated code, and reports on the "survival rate" of the mutations, indicating the effectiveness of the test suite. Use this skill when the user requests to assess the quality of their tests using mutation testing techniques. Specific trigger terms include "mutation testing", "test effectiveness", "mutation score", and "surviving mutants".
running-integration-tests
This skill enables Claude to run and manage integration test suites. It automates environment setup, database seeding, service orchestration, and cleanup. Use this skill when the user asks to "run integration tests", "execute integration tests", or any command that implies running integration tests for a project, including specifying particular test suites or options like code coverage. It is triggered by phrases such as "/run-integration", "/rit", or requests mentioning "integration tests". The plugin handles database creation, migrations, seeding, and dependent service management.
generating-end-to-end-tests
This skill enables Claude to generate end-to-end (E2E) tests for web applications. It leverages Playwright, Cypress, or Selenium to automate browser interactions and validate user workflows. Use this skill when the user requests to "create E2E tests", "generate end-to-end tests", or asks for help with "browser-based testing". The skill is particularly useful for testing user registration, login flows, shopping cart functionality, and other multi-step processes within a web application. It supports cross-browser testing and can be used to verify the responsiveness of web applications on different devices.
conducting-browser-compatibility-tests
This skill enables cross-browser compatibility testing for web applications using BrowserStack, Selenium Grid, or Playwright. It tests across Chrome, Firefox, Safari, and Edge, identifying browser-specific bugs and ensuring consistent functionality. It is used when a user requests to "test browser compatibility", "run cross-browser tests", or uses the `/browser-test` or `/bt` command to assess web application behavior across different browsers and devices. The skill generates a report detailing compatibility issues and screenshots for visual verification. Activates when you request "conducting browser compatibility tests" functionality.
generating-cli-tests
Generate pytest tests for Typer CLI commands. Includes fixtures (temp_storage, sample_data), CliRunner patterns, confirmation handling (y/n/--force), and edge case coverage. Use when user asks to "write tests for", "test my CLI", "add test coverage", or any CLI + test request.
run-acceptance-tests
Guide for running acceptance tests for a Terraform provider. Use this when asked to run an acceptance test or to run a test with the prefix `TestAcc`.
creating-oracle-to-postgres-migration-integration-tests
Creates integration test cases for .NET data access artifacts during Oracle-to-PostgreSQL database migrations. Generates DB-agnostic xUnit tests with deterministic seed data that validate behavior consistency across both database systems. Use when creating integration tests for a migrated project, generating test coverage for data access layers, or writing Oracle-to-PostgreSQL migration validation tests.