go-testing
Use when writing, reviewing, or improving Go test code — including table-driven tests, subtests, parallel tests, test helpers, test doubles, and assertions with cmp.Diff. Also use when a user asks to write a test for a Go function, even if they don't mention specific patterns like table-driven tests or subtests. Does not cover benchmark performance testing (see go-performance).
Best use case
go-testing is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using go-testing should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/go-testing/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
go-testing guides writing, reviewing, and improving Go test code: table-driven tests, subtests, parallel tests, test helpers, test doubles, and assertions with cmp.Diff. Benchmark performance testing is out of scope (see go-performance).
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Go Testing
## Quick Reference
| Pattern | Use When |
|---------|----------|
| `t.Error` | Default — report failure, keep running |
| `t.Fatal` | Setup failed or continuing is meaningless |
| `cmp.Diff` | Comparing structs, slices, maps, protos |
| Table-driven | Many cases share identical logic |
| Subtests | Need filtering, parallel execution, or naming |
| `t.Helper()` | Any test helper function (call as first statement) |
| `t.Cleanup()` | Teardown in helpers instead of defer |
---
## Useful Test Failures
> **Normative**: Test failures must be diagnosable without reading the test
> source.
Every failure message must include: function name, inputs, actual (got), and
expected (want). Use the format `YourFunc(%v) = %v, want %v`.
```go
// Good:
t.Errorf("Add(2, 3) = %d, want %d", got, 5)
// Bad: Missing function name and inputs
t.Errorf("got %d, want %d", got, 5)
```
Always print got before want: `got %v, want %v` — never reversed.
---
## No Assertion Libraries
> **Normative**: Do not use assertion libraries. Use `cmp.Diff` for complex
> comparisons.
```go
if diff := cmp.Diff(want, got); diff != "" {
	t.Errorf("GetPost() mismatch (-want +got):\n%s", diff)
}
```
For protocol buffers, add `protocmp.Transform()` as a cmp option. Always
include the direction key `(-want +got)` in diff messages. Avoid comparing
JSON/serialized output — compare semantically instead.
> Read [references/TEST-HELPERS.md](references/TEST-HELPERS.md) when writing
> custom comparison helpers or domain-specific test utilities.
---
## t.Error vs t.Fatal
> **Normative**: Use `t.Error` by default to report all failures in one run.
> Use `t.Fatal` only when continuing is impossible.
**Choose `t.Fatal` when:**
- Setup fails (DB connection, file load)
- The next assertion depends on the previous one succeeding (e.g., decode after
encode)
**Never call `t.Fatal`/`t.FailNow` from a goroutine** other than the test
goroutine — use `t.Error` instead.
> Read [references/TEST-HELPERS.md](references/TEST-HELPERS.md) when writing
> helpers that need to choose between t.Error and t.Fatal, or for detailed
> examples of both.
---
## Table-Driven Tests
> See `assets/table-test-template.go` when scaffolding a new table-driven test and you need the canonical struct, loop, and subtest layout.
> **Advisory**: Use table-driven tests when many cases share identical logic.
**Use table tests when:** all cases run the same code path with no conditional
setup, mocking, or assertions. A single `shouldErr` bool is acceptable.
**Don't use table tests when:** cases need complex setup, conditional mocking,
or multiple branches — write separate test functions instead.
**Key rules:**
- Use field names when cases span many lines or have same-type adjacent fields
- Include inputs in failure messages — never identify rows by index
> Read [references/TABLE-DRIVEN-TESTS.md](references/TABLE-DRIVEN-TESTS.md)
> when writing table-driven tests, subtests, or parallel tests.
> **Validation**: After generating or modifying tests, run `go test -run TestXxx -v` to verify the tests compile and pass. Fix any compilation errors before proceeding.
---
## Test Helpers
> **Normative**: Test helpers must call `t.Helper()` first and use `t.Cleanup()`
> for teardown.
```go
func setupTestDB(t *testing.T) *sql.DB {
	t.Helper()
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		t.Fatalf("Could not open database: %v", err)
	}
	t.Cleanup(func() { db.Close() })
	return db
}
```
> Read [references/TEST-HELPERS.md](references/TEST-HELPERS.md) when writing
> test helpers, cleanup functions, or custom comparison utilities.
---
## Test Error Semantics
> **Advisory**: Test error semantics, not error message strings.
```go
// Bad: Brittle string comparison
if err.Error() != "invalid input" { ... }
// Good: Semantic check
if !errors.Is(err, ErrInvalidInput) { ... }
```
For simple presence checks when specific semantics don't matter:
```go
if gotErr := err != nil; gotErr != tt.wantErr {
	t.Errorf("f(%v) error = %v, want error presence = %t", tt.input, err, tt.wantErr)
}
```
---
## Test Organization
> Read [references/TEST-ORGANIZATION.md](references/TEST-ORGANIZATION.md) when
> working with test doubles, choosing test package placement, or scoping test
> setup.
> Read [references/VALIDATION-APIS.md](references/VALIDATION-APIS.md) when
> designing reusable test validation functions.
---
## Integration Testing
> Read [references/INTEGRATION.md](references/INTEGRATION.md) when writing
> TestMain, acceptance tests, or tests that need real HTTP/RPC transports.
---
## Available Scripts
- **`scripts/gen-table-test.sh`** — Generates a table-driven test scaffold
```bash
bash scripts/gen-table-test.sh ParseConfig config > config/parse_config_test.go
bash scripts/gen-table-test.sh --parallel ParseConfig config # with t.Parallel()
bash scripts/gen-table-test.sh --output config/parse_config_test.go ParseConfig config
```
---
## Related Skills
- **Error testing**: See [go-error-handling](../go-error-handling/SKILL.md) when testing error semantics with `errors.Is`/`errors.As` or sentinel errors
- **Interface mocking**: See [go-interfaces](../go-interfaces/SKILL.md) when creating test doubles by implementing interfaces at the consumer side
- **Naming test functions**: See [go-naming](../go-naming/SKILL.md) when naming test functions, subtests, or test helper utilities
- **Linter integration**: See [go-linting](../go-linting/SKILL.md) when running linters alongside tests in CI or pre-commit hooks
Creates an integration testing plan for .NET data access artifacts during Oracle-to-PostgreSQL database migrations. Analyzes a single project to identify repositories, DAOs, and service layers that interact with the database, then produces a structured testing plan. Use when planning integration test coverage for a migrated project, identifying which data access methods need tests, or preparing for Oracle-to-PostgreSQL migration validation.