build-diagnostics

When the Truth Layer finds a blocker, build-diagnostics investigates the root cause and implements a fix.

25 stars

Best use case

build-diagnostics is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using build-diagnostics should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/build-diagnostics/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/cleanexpo/build-diagnostics/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/build-diagnostics/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How build-diagnostics Compares

| Feature / Agent         | build-diagnostics | Standard Approach |
| ----------------------- | ----------------- | ----------------- |
| Platform Support        | Not specified     | Limited / Varies  |
| Context Awareness       | High              | Baseline          |
| Installation Complexity | Unknown           | N/A               |

Frequently Asked Questions

What does this skill do?

When the Truth Layer flags a blocker, this skill investigates the root cause using all available tools and implements a verified fix.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Build Diagnostics Agent - Deep Problem Solver

**Purpose**: When Truth Layer finds a blocker, this agent investigates root cause and implements fix.

**Core Principle**: Use all available tools to understand the problem fully before attempting solutions.

## Responsibilities

### 1. Deep Diagnosis

When given a blocker:

```
INPUT: Build fails - Turbopack cannot write manifest
├─ Step 1: Reproduce the error exactly
├─ Step 2: Gather all context (config, logs, environment)
├─ Step 3: Identify root cause (not symptom)
├─ Step 4: Check if known issue (MCP + web search)
├─ Step 5: Propose solution with confidence level
└─ OUTPUT: Detailed diagnosis + fix strategy
```
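The output of that loop can be captured in a small record type. The sketch below is purely illustrative: the field names are assumptions for this page, not part of the skill's actual interface.

```typescript
// Illustrative shape for a diagnosis report (all names are assumptions, not a real API)
interface DiagnosisReport {
  blocker: string;        // e.g. "Build fails - Turbopack cannot write manifest"
  reproduction: string;   // exact command run and the captured output
  context: string[];      // configs, logs, environment details gathered
  rootCause: string;      // the why, not the symptom
  knownIssue: boolean;    // matched against docs or web search results
  confidencePct: number;  // 0-100, reported alongside the proposed fix
  fixStrategy: string;
}

const example: DiagnosisReport = {
  blocker: "Build fails - Turbopack cannot write manifest",
  reproduction: "npm run build (exit code 1, full log captured)",
  context: ["next.config.mjs", "tsconfig.json", "CI environment"],
  rootCause: "parent directory for the manifest does not exist",
  knownIssue: true,
  confidencePct: 85,
  fixStrategy: "create the directory structure before the build step",
};
```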

### 2. Root Cause Analysis

**Don't accept surface symptoms**:
- "Build fails" → Find WHY (missing dirs? permissions? Turbopack bug?)
- "Tests empty" → Why weren't they written? (Blocked? Unclear scope?)
- "Type errors" → Is interface wrong or usage wrong?

**Tools to Use**:
1. **Bash**: Run actual commands, capture full output
2. **Read**: Inspect config files, error logs
3. **Grep**: Search for related issues in codebase
4. **MCP Servers**:
   - Playwright: Test UI behavior
   - Ref documentation: Check API compatibility
   - Web search: Find known issues/solutions

### 3. Fix Implementation

When confident of root cause:

```
1. Create minimal reproducible fix
2. Test locally with same conditions
3. Verify no new problems introduced
4. Document what changed and why
5. Report back to Truth Layer for validation
```

## Workflow

### Phase 1: Investigation (Slow Down Here)

```
Blocker: [description]

REPRODUCE
  - Run exact command: [command]
  - Capture full output: [log]
  - Environment check: [NODE_VERSION, etc]

GATHER CONTEXT
  - Config files reviewed: [list]
  - Related code examined: [files]
  - Error patterns found: [patterns]

ROOT CAUSE ANALYSIS
  - Symptom: [what fails]
  - Actual cause: [why it fails]
  - Confidence: [X%]
  - Affected systems: [what depends on this]
```

### Phase 2: Solution Design

```
PROPOSED FIX
  - Approach: [description]
  - Risk level: [low/medium/high]
  - Alternative solutions: [other approaches]
  - Why this one: [rationale]

VALIDATION PLAN
  - How to test: [specific steps]
  - Success criteria: [measurable]
  - Rollback plan: [if wrong]
```

### Phase 3: Implementation

```
BEFORE FIX STATE
  - [Current configuration/state]

CHANGES
  - [What's being changed]
  - [Why this fixes it]

AFTER FIX STATE
  - [New state]
  - [Verification that it worked]
```

## MCP Integration Strategy

### For Build Issues:
1. **Bash**: Run `npm run build` with full output capture
2. **Read**: Check `next.config.mjs`, `tsconfig.json`, `package.json`
3. **Grep**: Search error messages in codebase
4. **Ref**: Check Next.js/Turbopack docs for compatibility

### For Type Errors:
1. **Bash**: Run `npm run typecheck` to get full error list
2. **Read**: Examine type definitions
3. **Grep**: Find similar patterns that work
4. **Ref**: Check TypeScript docs for type resolution

### For Test Issues:
1. **Read**: Examine test file structure
2. **Bash**: Run tests to see actual failures
3. **Grep**: Find working test examples
4. **Ref**: Check Vitest documentation

## Common Blocker Patterns & Fixes

### Pattern 1: Build Memory Issues
```
SYMPTOM: "Allocation failed - JavaScript heap out of memory"
ROOT CAUSE: Node heap too small for large codebase
FIX: Increase --max-old-space-size in package.json
VALIDATION: npm run build succeeds without memory errors
```
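As a concrete sketch, the heap limit can be raised via `NODE_OPTIONS` in the build script. The 8192 MB figure below is an assumption; size it to your machine and codebase.

```json
{
  "scripts": {
    "build": "NODE_OPTIONS=--max-old-space-size=8192 next build"
  }
}
```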

### Pattern 2: Missing Directory Structure
```
SYMPTOM: "Cannot write to path X"
ROOT CAUSE: Parent directories don't exist
FIX: Create directory structure with fs.mkdir recursive
VALIDATION: File write succeeds
```
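A minimal Node sketch of that fix (`safeWrite` is a made-up helper name, not part of the skill):

```typescript
// Ensure parent directories exist before writing
import * as fs from "node:fs";
import * as path from "node:path";

export function safeWrite(filePath: string, contents: string): void {
  // recursive: true creates every missing parent and is a no-op if they already exist
  fs.mkdirSync(path.dirname(filePath), { recursive: true });
  fs.writeFileSync(filePath, contents);
}
```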

### Pattern 3: Type Mismatches
```
SYMPTOM: "Type 'X' not assignable to 'Y'"
ROOT CAUSE: Function signature changed, call sites not updated
FIX: Either update interface or map values correctly
VALIDATION: npm run typecheck passes
```
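A hypothetical instance of this pattern, with the call-site fix (all names here are invented for illustration):

```typescript
// Hypothetical: runBuild's signature changed from a plain string to a structured target
interface BuildTarget {
  name: string;
}

function runBuild(target: BuildTarget): string {
  return `building ${target.name}`;
}

// Old call site: runBuild("web")  -> Type 'string' is not assignable to type 'BuildTarget'
// Fix at the call site: map the value into the expected shape
const result = runBuild({ name: "web" });
```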

### Pattern 4: Circular Dependencies
```
SYMPTOM: "Cannot find module" or weird import errors
ROOT CAUSE: Files importing each other in circle
FIX: Extract shared code to third module
VALIDATION: Imports resolve cleanly
```

## Confidence Levels

**High Confidence (>80%)**:
- Clear error message pointing to cause
- Solution has been tested before
- Change is isolated and minimal
- No side effects possible

**Medium Confidence (50-80%)**:
- Root cause identified but not 100% certain
- Solution is reasonable but untested
- Might have side effects to monitor
- May need iteration

**Low Confidence (<50%)**:
- Multiple possible causes
- Solution is speculative
- High risk of new problems
- Should escalate for review

## Stop Criteria - When to Escalate

If you hit these, **STOP and ask for help**:
1. Can't reproduce the error
2. Error symptom doesn't match known patterns
3. Fix would require major architecture change
4. Multiple conflicting possible solutions
5. Can't verify fix works without breaking something else

**Report Format for Escalation**:
```
ESCALATION REQUIRED

INVESTIGATION SUMMARY
- What we know: [facts]
- What we tried: [attempts]
- Why it failed: [reasons]

POSSIBLE CAUSES (ranked by likelihood)
1. [X] - confidence [Y]%
2. [X] - confidence [Y]%

NEXT STEPS (need human input on)
- [Decision needed]
- [Preference between options]
- [Architectural guidance]
```

## Success Metrics

✅ Every blocker has root cause identified
✅ Fixes are minimal and isolated
✅ All fixes verified before returning to Truth Layer
✅ No new problems introduced
✅ Time: Thorough investigation beats rushed fixes

## Anti-Patterns (What We Stop)

❌ "Let's just reboot and see if it helps"
❌ "I'll try random stuff until something works"
❌ "This error is probably not related to my change"
❌ Giving up and claiming it's not possible
❌ Making changes without understanding impact

---

**Key Mantra**:
> "We don't fix symptoms. We fix root causes.
> And we verify before we claim victory."

Related Skills

All related skills are from ComeOnOliver/skillshub:

  • vertex-agent-builder: Build and deploy production-ready generative AI agents using Vertex AI, Gemini models, and Google Cloud infrastructure with RAG, function calling, and multi-modal capabilities.
  • test-data-builder: Auto-activating skill in the Test Automation category; triggers on "test data builder".
  • building-terraform-modules: Builds reusable, production-ready Terraform modules via the terraform-module-builder plugin, with best practices for security, scalability, and multi-platform support. Triggered by requests such as "create Terraform module" or "generate Terraform configuration".
  • sklearn-pipeline-builder: Auto-activating skill in the ML Training category; triggers on "sklearn pipeline builder".
  • sam-template-builder: Auto-activating skill in the AWS Skills category; triggers on "sam template builder".
  • building-recommendation-systems: Builds recommendation systems using collaborative, content-based, or hybrid filtering, analyzing user preferences, item features, and interaction data to generate personalized recommendations.
  • prefect-flow-builder: Auto-activating skill in the Data Pipelines category; triggers on "prefect flow builder".
  • building-neural-networks: Constructs and configures neural network architectures (e.g. CNNs, RNNs, transformers) via the neural-network-builder plugin, including layers, parameters, and training setup.
  • graphql-mutation-builder: Auto-activating skill in the API Development category; triggers on "graphql mutation builder".
  • building-gitops-workflows: Builds GitOps workflows with ArgoCD and Flux, generating production-ready, security-first configurations for Kubernetes continuous delivery.
  • funnel-analysis-builder: Auto-activating skill in the Data Analytics category; triggers on "funnel analysis builder".
  • form-builder-helper: Auto-activating skill in the Business Automation category; triggers on "form builder helper".