golang-testing
Go testing patterns including table-driven tests, subtests, benchmarks, fuzzing, and test coverage. Follows TDD methodology with idiomatic Go practices.
Best use case
golang-testing is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using golang-testing should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/golang-testing/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Go testing patterns including table-driven tests, subtests, benchmarks, fuzzing, and test coverage. Follows TDD methodology with idiomatic Go practices.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Go Testing Patterns
Comprehensive Go testing patterns for writing reliable, maintainable tests following TDD methodology.
## When to Activate
- Writing new Go functions or methods
- Adding test coverage to existing code
- Creating benchmarks for performance-critical code
- Implementing fuzz tests for input validation
- Following TDD workflow in Go projects
## TDD Workflow for Go
### The RED-GREEN-REFACTOR Cycle
```
RED → Write a failing test first
GREEN → Write minimal code to pass the test
REFACTOR → Improve code while keeping tests green
REPEAT → Continue with next requirement
```
### Step-by-Step TDD in Go
```go
// Step 1: Define the interface/signature
// calculator.go
package calculator
func Add(a, b int) int {
panic("not implemented") // Placeholder
}
// Step 2: Write failing test (RED)
// calculator_test.go
package calculator
import "testing"
func TestAdd(t *testing.T) {
got := Add(2, 3)
want := 5
if got != want {
t.Errorf("Add(2, 3) = %d; want %d", got, want)
}
}
// Step 3: Run test - verify FAIL
// $ go test
// --- FAIL: TestAdd (0.00s)
// panic: not implemented
// Step 4: Implement minimal code (GREEN)
func Add(a, b int) int {
return a + b
}
// Step 5: Run test - verify PASS
// $ go test
// PASS
// Step 6: Refactor if needed, verify tests still pass
```
## Table-Driven Tests
This is the standard pattern for Go tests; it enables comprehensive coverage with minimal code.
```go
func TestAdd(t *testing.T) {
tests := []struct {
name string
a, b int
expected int
}{
{"positive numbers", 2, 3, 5},
{"negative numbers", -1, -2, -3},
{"zero values", 0, 0, 0},
{"mixed signs", -1, 1, 0},
{"large numbers", 1000000, 2000000, 3000000},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := Add(tt.a, tt.b)
if got != tt.expected {
t.Errorf("Add(%d, %d) = %d; want %d",
tt.a, tt.b, got, tt.expected)
}
})
}
}
```
### Table-Driven Tests with Error Cases
```go
func TestParseConfig(t *testing.T) {
tests := []struct {
name string
input string
want *Config
wantErr bool
}{
{
name: "valid config",
input: `{"host": "localhost", "port": 8080}`,
want: &Config{Host: "localhost", Port: 8080},
},
{
name: "invalid JSON",
input: `{invalid}`,
wantErr: true,
},
{
name: "empty input",
input: "",
wantErr: true,
},
{
name: "minimal config",
input: `{}`,
want: &Config{}, // Zero value config
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ParseConfig(tt.input)
if tt.wantErr {
if err == nil {
t.Error("expected error, got nil")
}
return
}
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("got %+v; want %+v", got, tt.want)
}
})
}
}
```
## Subtests and Sub-benchmarks
### Organizing Related Tests
```go
func TestUser(t *testing.T) {
// Setup shared by all subtests
db := setupTestDB(t)
t.Run("Create", func(t *testing.T) {
user := &User{Name: "Alice"}
err := db.CreateUser(user)
if err != nil {
t.Fatalf("CreateUser failed: %v", err)
}
if user.ID == "" {
t.Error("expected user ID to be set")
}
})
t.Run("Get", func(t *testing.T) {
user, err := db.GetUser("alice-id")
if err != nil {
t.Fatalf("GetUser failed: %v", err)
}
if user.Name != "Alice" {
t.Errorf("got name %q; want %q", user.Name, "Alice")
}
})
t.Run("Update", func(t *testing.T) {
// ...
})
t.Run("Delete", func(t *testing.T) {
// ...
})
}
```
### Parallel Subtests
```go
func TestParallel(t *testing.T) {
tests := []struct {
name string
input string
}{
{"case1", "input1"},
{"case2", "input2"},
{"case3", "input3"},
}
for _, tt := range tests {
tt := tt // Capture range variable (unnecessary since Go 1.22's per-iteration loop variables)
t.Run(tt.name, func(t *testing.T) {
t.Parallel() // Run subtests in parallel
result := Process(tt.input)
// assertions...
_ = result
})
}
}
```
## Test Helpers
### Helper Functions
```go
func setupTestDB(t *testing.T) *sql.DB {
t.Helper() // Marks this as a helper; failures are reported at the caller's line
db, err := sql.Open("sqlite3", ":memory:")
if err != nil {
t.Fatalf("failed to open database: %v", err)
}
// Cleanup when test finishes
t.Cleanup(func() {
db.Close()
})
// Run migrations
if _, err := db.Exec(schema); err != nil {
t.Fatalf("failed to create schema: %v", err)
}
return db
}
func assertNoError(t *testing.T, err error) {
t.Helper()
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
}
func assertEqual[T comparable](t *testing.T, got, want T) {
t.Helper()
if got != want {
t.Errorf("got %v; want %v", got, want)
}
}
```
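A short usage sketch of the generic `assertEqual` helper above, reusing the `Add` function from the TDD example:

```go
func TestAddWithHelpers(t *testing.T) {
	// On failure, assertEqual reports this test's line thanks to t.Helper()
	assertEqual(t, Add(2, 3), 5)
	assertEqual(t, Add(-1, 1), 0)
}
```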
### Temporary Files and Directories
```go
func TestFileProcessing(t *testing.T) {
// Create temp directory - automatically cleaned up
tmpDir := t.TempDir()
// Create test file
testFile := filepath.Join(tmpDir, "test.txt")
err := os.WriteFile(testFile, []byte("test content"), 0644)
if err != nil {
t.Fatalf("failed to create test file: %v", err)
}
// Run test
result, err := ProcessFile(testFile)
if err != nil {
t.Fatalf("ProcessFile failed: %v", err)
}
// Assert...
_ = result
}
```
## Golden Files
Testing against expected output files stored in `testdata/`.
```go
var update = flag.Bool("update", false, "update golden files")
func TestRender(t *testing.T) {
tests := []struct {
name string
input Template
}{
{"simple", Template{Name: "test"}},
{"complex", Template{Name: "test", Items: []string{"a", "b"}}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := Render(tt.input)
golden := filepath.Join("testdata", tt.name+".golden")
if *update {
// Update golden file: go test -update
err := os.WriteFile(golden, got, 0644)
if err != nil {
t.Fatalf("failed to update golden file: %v", err)
}
}
want, err := os.ReadFile(golden)
if err != nil {
t.Fatalf("failed to read golden file: %v", err)
}
if !bytes.Equal(got, want) {
t.Errorf("output mismatch:\ngot:\n%s\nwant:\n%s", got, want)
}
})
}
}
```
## Mocking with Interfaces
### Interface-Based Mocking
```go
// Define interface for dependencies
type UserRepository interface {
GetUser(id string) (*User, error)
SaveUser(user *User) error
}
// Production implementation
type PostgresUserRepository struct {
db *sql.DB
}
func (r *PostgresUserRepository) GetUser(id string) (*User, error) {
	// Real database query (sketch)
	row := r.db.QueryRow("SELECT id, name FROM users WHERE id = $1", id)
	var u User
	if err := row.Scan(&u.ID, &u.Name); err != nil {
		return nil, err
	}
	return &u, nil
}
// Mock implementation for tests
type MockUserRepository struct {
GetUserFunc func(id string) (*User, error)
SaveUserFunc func(user *User) error
}
func (m *MockUserRepository) GetUser(id string) (*User, error) {
return m.GetUserFunc(id)
}
func (m *MockUserRepository) SaveUser(user *User) error {
return m.SaveUserFunc(user)
}
// Test using mock
func TestUserService(t *testing.T) {
mock := &MockUserRepository{
GetUserFunc: func(id string) (*User, error) {
if id == "123" {
return &User{ID: "123", Name: "Alice"}, nil
}
return nil, ErrNotFound
},
}
service := NewUserService(mock)
user, err := service.GetUserProfile("123")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if user.Name != "Alice" {
t.Errorf("got name %q; want %q", user.Name, "Alice")
}
}
```
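When a test must assert that a call happened at all, a recording spy is a common variant. A minimal sketch building on the mock above (the `Saved` field and the embedding are illustrative, not part of the skill's API):

```go
// SpyUserRepository records SaveUser calls while inheriting
// GetUser behavior from the embedded mock.
type SpyUserRepository struct {
	MockUserRepository
	Saved []*User // Every user passed to SaveUser
}

func (s *SpyUserRepository) SaveUser(user *User) error {
	s.Saved = append(s.Saved, user)
	return nil
}
```

After exercising the service, the test can assert on `len(spy.Saved)` and the recorded arguments.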
## Benchmarks
### Basic Benchmarks
```go
func BenchmarkProcess(b *testing.B) {
data := generateTestData(1000)
b.ResetTimer() // Don't count setup time
for i := 0; i < b.N; i++ {
Process(data)
}
}
// Run: go test -bench=BenchmarkProcess -benchmem
// Output: BenchmarkProcess-8 10000 105234 ns/op 4096 B/op 10 allocs/op
```
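Go 1.24 added `testing.B.Loop`, which manages the timer automatically (setup before the loop is not counted) and keeps the compiler from optimizing the benchmarked call away. A sketch of the same benchmark in that style, assuming Go 1.24+ and the `generateTestData` helper above:

```go
func BenchmarkProcessLoop(b *testing.B) {
	data := generateTestData(1000) // Runs once; excluded from timing

	for b.Loop() {
		Process(data)
	}
}
```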
### Benchmark with Different Sizes
```go
func BenchmarkSort(b *testing.B) {
sizes := []int{100, 1000, 10000, 100000}
for _, size := range sizes {
b.Run(fmt.Sprintf("size=%d", size), func(b *testing.B) {
data := generateRandomSlice(size)
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Make a copy to avoid sorting already sorted data
tmp := make([]int, len(data))
copy(tmp, data)
sort.Ints(tmp)
}
})
}
}
```
### Memory Allocation Benchmarks
```go
func BenchmarkStringConcat(b *testing.B) {
parts := []string{"hello", "world", "foo", "bar", "baz"}
b.Run("plus", func(b *testing.B) {
for i := 0; i < b.N; i++ {
var s string
for _, p := range parts {
s += p
}
_ = s
}
})
b.Run("builder", func(b *testing.B) {
for i := 0; i < b.N; i++ {
var sb strings.Builder
for _, p := range parts {
sb.WriteString(p)
}
_ = sb.String()
}
})
b.Run("join", func(b *testing.B) {
for i := 0; i < b.N; i++ {
_ = strings.Join(parts, "")
}
})
}
```
## Fuzzing (Go 1.18+)
### Basic Fuzz Test
```go
func FuzzParseJSON(f *testing.F) {
// Add seed corpus
f.Add(`{"name": "test"}`)
f.Add(`{"count": 123}`)
f.Add(`[]`)
f.Add(`""`)
f.Fuzz(func(t *testing.T, input string) {
var result map[string]interface{}
err := json.Unmarshal([]byte(input), &result)
if err != nil {
// Invalid JSON is expected for random input
return
}
// If parsing succeeded, re-encoding should work
_, err = json.Marshal(result)
if err != nil {
t.Errorf("Marshal failed after successful Unmarshal: %v", err)
}
})
}
// Run: go test -fuzz=FuzzParseJSON -fuzztime=30s
```
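When fuzzing finds a crashing input, Go saves it under `testdata/fuzz/FuzzParseJSON/` and replays it on every subsequent `go test` run. A single saved entry can be targeted directly with `go test -run=FuzzParseJSON/<entryname>`, where `<entryname>` is the saved corpus file's name.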
### Fuzz Test with Multiple Inputs
```go
func FuzzCompare(f *testing.F) {
f.Add("hello", "world")
f.Add("", "")
f.Add("abc", "abc")
f.Fuzz(func(t *testing.T, a, b string) {
result := Compare(a, b)
// Property: Compare(a, a) should always equal 0
if a == b && result != 0 {
t.Errorf("Compare(%q, %q) = %d; want 0", a, b, result)
}
// Property: Compare(a, b) and Compare(b, a) should have opposite signs
reverse := Compare(b, a)
if (result > 0 && reverse >= 0) || (result < 0 && reverse <= 0) {
if result != 0 || reverse != 0 {
t.Errorf("Compare(%q, %q) = %d, Compare(%q, %q) = %d; inconsistent",
a, b, result, b, a, reverse)
}
}
})
}
```
## Test Coverage
### Running Coverage
```bash
# Basic coverage
go test -cover ./...
# Generate coverage profile
go test -coverprofile=coverage.out ./...
# View coverage in browser
go tool cover -html=coverage.out
# View coverage by function
go tool cover -func=coverage.out
# Coverage with race detection
go test -race -coverprofile=coverage.out ./...
```
### Coverage Targets
| Code Type | Target |
|-----------|--------|
| Critical business logic | 100% |
| Public APIs | 90%+ |
| General code | 80%+ |
| Generated code | Exclude |
### Excluding Generated Code from Coverage
Generated files (for example, mocks produced by `//go:generate mockgen -source=interface.go -destination=mock_interface.go`) are easiest to exclude by filtering them out of the coverage profile by file name; the `-tags` flag cannot negate a build tag:
```bash
go test -coverprofile=coverage.out ./...
grep -v "mock_" coverage.out > coverage.filtered.out
go tool cover -func=coverage.filtered.out
```
## HTTP Handler Testing
```go
func TestHealthHandler(t *testing.T) {
// Create request
req := httptest.NewRequest(http.MethodGet, "/health", nil)
w := httptest.NewRecorder()
// Call handler
HealthHandler(w, req)
// Check response
resp := w.Result()
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Errorf("got status %d; want %d", resp.StatusCode, http.StatusOK)
}
body, _ := io.ReadAll(resp.Body)
if string(body) != "OK" {
t.Errorf("got body %q; want %q", body, "OK")
}
}
func TestAPIHandler(t *testing.T) {
tests := []struct {
name string
method string
path string
body string
wantStatus int
wantBody string
}{
{
name: "get user",
method: http.MethodGet,
path: "/users/123",
wantStatus: http.StatusOK,
wantBody: `{"id":"123","name":"Alice"}`,
},
{
name: "not found",
method: http.MethodGet,
path: "/users/999",
wantStatus: http.StatusNotFound,
},
{
name: "create user",
method: http.MethodPost,
path: "/users",
body: `{"name":"Bob"}`,
wantStatus: http.StatusCreated,
},
}
handler := NewAPIHandler()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var body io.Reader
if tt.body != "" {
body = strings.NewReader(tt.body)
}
req := httptest.NewRequest(tt.method, tt.path, body)
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
handler.ServeHTTP(w, req)
if w.Code != tt.wantStatus {
t.Errorf("got status %d; want %d", w.Code, tt.wantStatus)
}
if tt.wantBody != "" && w.Body.String() != tt.wantBody {
t.Errorf("got body %q; want %q", w.Body.String(), tt.wantBody)
}
})
}
}
```
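`httptest.NewRecorder` exercises a handler in-process. When a real network round trip matters (redirect following, cookies, client timeouts), `httptest.NewServer` starts a listener on a random local port. A minimal sketch, reusing the `HealthHandler` from above:

```go
func TestHealthHandlerLive(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(HealthHandler))
	defer srv.Close() // Shut the server down when the test ends

	resp, err := http.Get(srv.URL + "/health")
	if err != nil {
		t.Fatalf("GET %s failed: %v", srv.URL, err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Errorf("got status %d; want %d", resp.StatusCode, http.StatusOK)
	}
}
```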
## Testing Commands
```bash
# Run all tests
go test ./...
# Run tests with verbose output
go test -v ./...
# Run specific test
go test -run TestAdd ./...
# Run tests matching pattern
go test -run "TestUser/Create" ./...
# Run tests with race detector
go test -race ./...
# Run tests with coverage
go test -cover -coverprofile=coverage.out ./...
# Run short tests only
go test -short ./...
# Run tests with timeout
go test -timeout 30s ./...
# Run benchmarks
go test -bench=. -benchmem ./...
# Run fuzzing
go test -fuzz=FuzzParse -fuzztime=30s ./...
# Count test runs (for flaky test detection)
go test -count=10 ./...
```
## Best Practices
**DO:**
- Write tests FIRST (TDD)
- Use table-driven tests for comprehensive coverage
- Test behavior, not implementation
- Use `t.Helper()` in helper functions
- Use `t.Parallel()` for independent tests
- Clean up resources with `t.Cleanup()`
- Use meaningful test names that describe the scenario
**DON'T:**
- Test private functions directly (test through public API)
- Use `time.Sleep()` in tests (use channels or other synchronization; see the sketch after this list)
- Ignore flaky tests (fix or remove them)
- Mock everything (prefer integration tests when possible)
- Skip error path testing
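As referenced above, channel-based synchronization replaces fragile sleeps with an explicit completion signal plus a deadline. A minimal sketch (the `worker` function is a hypothetical stand-in for the code under test):

```go
func TestWorkerFinishes(t *testing.T) {
	done := make(chan struct{})
	go func() {
		worker() // Hypothetical function under test
		close(done)
	}()

	select {
	case <-done:
		// Worker finished; assert on its results here
	case <-time.After(5 * time.Second):
		t.Fatal("worker did not finish within 5s")
	}
}
```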
## Integration with CI/CD
```yaml
# GitHub Actions example
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Run tests
        run: go test -race -coverprofile=coverage.out ./...
      - name: Check coverage (fail below 80%)
        run: |
          go tool cover -func=coverage.out | grep total | awk '{print $3}' | \
          awk -F'%' '{if ($1 < 80) exit 1}'
```
**Remember**: Tests are documentation. They show how your code is meant to be used. Write them clearly and keep them up to date.

Related Skills
python-testing
Python testing strategies using pytest, TDD methodology, fixtures, mocking, parametrization, and coverage requirements.
property-based-testing
Property-based testing (PBT) patterns with fast-check (JS/TS), Hypothesis (Python), and gopter (Go). Generate random inputs, define invariants, shrink failures to minimal cases. Adapted from Trail of Bits. Use when testing pure functions, parsers, serializers, state machines, or any code where example-based tests miss edge cases.
performance-testing
Load testing with k6/Artillery, response time thresholds, memory leak detection, N+1 query detection, and CI integration.
load-testing-patterns
k6 script templates, load profiles, response time thresholds, SLO validation, and performance testing strategies.
golang-patterns
Idiomatic Go patterns, best practices, and conventions for building robust, efficient, and maintainable Go applications.
contract-testing-patterns
Pact consumer-driven contracts, provider verification, schema evolution
accessibility-testing
axe-core integration, WCAG 2.2 AA checklist, keyboard navigation testing, screen reader testing, and ARIA pattern validation.
workflow-router
Goal-based workflow orchestration - routes tasks to specialist agents based on user goals
wiring
Wiring Verification
websocket-patterns
Connection management, room patterns, reconnection strategies, message buffering, and binary protocol design.
visual-verdict
Screenshot comparison QA for frontend development. Takes a screenshot of the current implementation, scores it across multiple visual dimensions, and returns a structured PASS/REVISE/FAIL verdict with concrete fixes. Use when implementing UI from a design reference or verifying visual correctness.
verification-loop
Comprehensive verification system covering build, types, lint, tests, security, and diff review before a PR.