# conductor-new-track
Create a new track with specification and phased implementation plan
## Best use case

conductor-new-track is best used when you need a repeatable AI agent workflow instead of a one-off prompt. Teams using it should expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
## When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation

### Claude Code / Cursor / Codex

### Manual Installation
- Download `SKILL.md` from GitHub
- Place it in `.claude/skills/conductor-new-track/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
## How conductor-new-track Compares
| Feature / Agent | conductor-new-track | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single SKILL.md file | N/A |
## Frequently Asked Questions

### What does this skill do?

It creates a new track with a specification and phased implementation plan.

### Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# New Track
Create a new track (feature, bug fix, chore, or refactor) with a detailed specification and phased implementation plan.
## Use this skill when
- Working on new track tasks or workflows
- Needing guidance, best practices, or checklists for new track
## Do not use this skill when
- The task is unrelated to new track
- You need a different domain or tool outside this scope
## Instructions
- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.
## Pre-flight Checks
1. Verify Conductor is initialized:
- Check `conductor/product.md` exists
- Check `conductor/tech-stack.md` exists
- Check `conductor/workflow.md` exists
- If missing: Display error and suggest running `/conductor:setup` first
2. Load context files:
- Read `conductor/product.md` for product context
- Read `conductor/tech-stack.md` for technical context
- Read `conductor/workflow.md` for TDD/commit preferences
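The pre-flight check above can be sketched as a small helper. The `conductor/` file names come from this document; the function name is illustrative:

```python
from pathlib import Path

# Context files this document requires before a track can be created
REQUIRED = ["product.md", "tech-stack.md", "workflow.md"]

def preflight(root: Path) -> list[str]:
    """Return the Conductor context files missing under root (empty = ready)."""
    conductor = root / "conductor"
    return [name for name in REQUIRED if not (conductor / name).is_file()]
```

If the returned list is non-empty, the agent halts and suggests running `/conductor:setup` first.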
## Track Classification
Determine track type based on description or ask user:
```
What type of track is this?
1. Feature - New functionality
2. Bug - Fix for existing issue
3. Chore - Maintenance, dependencies, config
4. Refactor - Code improvement without behavior change
```
## Interactive Specification Gathering
**CRITICAL RULES:**
- Ask ONE question per turn
- Wait for user response before proceeding
- Tailor questions based on track type
- Maximum 6 questions total
### For Feature Tracks
**Q1: Feature Summary**
```
Describe the feature in 1-2 sentences.
[If argument provided, confirm: "You want to: {argument}. Is this correct?"]
```
**Q2: User Story**
```
Who benefits and how?
Format: As a [user type], I want to [action] so that [benefit].
```
**Q3: Acceptance Criteria**
```
What must be true for this feature to be complete?
List 3-5 acceptance criteria (one per line):
```
**Q4: Dependencies**
```
Does this depend on any existing code, APIs, or other tracks?
1. No dependencies
2. Depends on existing code (specify)
3. Depends on incomplete track (specify)
```
**Q5: Scope Boundaries**
```
What is explicitly OUT of scope for this track?
(Helps prevent scope creep)
```
**Q6: Technical Considerations (optional)**
```
Any specific technical approach or constraints?
(Press enter to skip)
```
### For Bug Tracks
**Q1: Bug Summary**
```
What is broken?
[If argument provided, confirm]
```
**Q2: Steps to Reproduce**
```
How can this bug be reproduced?
List steps:
```
**Q3: Expected vs Actual Behavior**
```
What should happen vs what actually happens?
```
**Q4: Affected Areas**
```
What parts of the system are affected?
```
**Q5: Root Cause Hypothesis (optional)**
```
Any hypothesis about the cause?
(Press enter to skip)
```
### For Chore/Refactor Tracks
**Q1: Task Summary**
```
What needs to be done?
[If argument provided, confirm]
```
**Q2: Motivation**
```
Why is this work needed?
```
**Q3: Success Criteria**
```
How will we know this is complete?
```
**Q4: Risk Assessment**
```
What could go wrong? Any risky changes?
```
## Track ID Generation
Generate track ID in format: `{shortname}_{YYYYMMDD}`
- Extract shortname from feature/bug summary (2-3 words, lowercase, hyphenated)
- Use current date
- Examples: `user-auth_20250115`, `nav-bug_20250115`
Validate uniqueness:
- Check `conductor/tracks.md` for existing IDs
- If collision, append counter: `user-auth_20250115_2`
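The ID format and collision rule above can be sketched as follows; the function name is illustrative, and `existing` stands in for the IDs parsed from `conductor/tracks.md`:

```python
import re
from datetime import date

def make_track_id(summary: str, existing: set[str], today: date) -> str:
    """Build a unique {shortname}_{YYYYMMDD} ID from a track summary."""
    # Shortname: first 2-3 lowercase words of the summary, hyphenated
    words = re.findall(r"[a-z0-9]+", summary.lower())[:3]
    track_id = f"{'-'.join(words)}_{today:%Y%m%d}"
    counter = 2
    while track_id in existing:  # collision in tracks.md: append a counter
        track_id = f"{'-'.join(words)}_{today:%Y%m%d}_{counter}"
        counter += 1
    return track_id
```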
## Specification Generation
Create `conductor/tracks/{trackId}/spec.md`:
```markdown
# Specification: {Track Title}
**Track ID:** {trackId}
**Type:** {Feature|Bug|Chore|Refactor}
**Created:** {YYYY-MM-DD}
**Status:** Draft
## Summary
{1-2 sentence summary}
## Context
{Product context from product.md relevant to this track}
## User Story (for features)
As a {user}, I want to {action} so that {benefit}.
## Problem Description (for bugs)
{Bug description, steps to reproduce}
## Acceptance Criteria
- [ ] {Criterion 1}
- [ ] {Criterion 2}
- [ ] {Criterion 3}
## Dependencies
{List dependencies or "None"}
## Out of Scope
{Explicit exclusions}
## Technical Notes
{Technical considerations or "None specified"}
---
_Generated by Conductor. Review and edit as needed._
```
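Filling the `{placeholder}` slots in the template above can be sketched as a simple substitution; this is an illustrative helper, not part of the skill itself:

```python
def render(template: str, values: dict[str, str]) -> str:
    """Fill {placeholder} slots in a spec or plan template.

    Placeholders with no supplied value are left visible, so a missing
    answer is easy to spot during the user review step."""
    out = template
    for key, value in values.items():
        out = out.replace("{" + key + "}", value)
    return out
```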
## User Review of Spec
Display the generated spec and ask:
```
Here is the specification I've generated:
{spec content}
Is this specification correct?
1. Yes, proceed to plan generation
2. No, let me edit (opens for inline edits)
3. Start over with different inputs
```
## Plan Generation
After spec approval, generate `conductor/tracks/{trackId}/plan.md`:
### Plan Structure
```markdown
# Implementation Plan: {Track Title}
**Track ID:** {trackId}
**Spec:** [spec.md](./spec.md)
**Created:** {YYYY-MM-DD}
**Status:** [ ] Not Started
## Overview
{Brief summary of implementation approach}
## Phase 1: {Phase Name}
{Phase description}
### Tasks
- [ ] Task 1.1: {Description}
- [ ] Task 1.2: {Description}
- [ ] Task 1.3: {Description}
### Verification
- [ ] {Verification step for phase 1}
## Phase 2: {Phase Name}
{Phase description}
### Tasks
- [ ] Task 2.1: {Description}
- [ ] Task 2.2: {Description}
### Verification
- [ ] {Verification step for phase 2}
## Phase 3: {Phase Name} (if needed)
...
## Final Verification
- [ ] All acceptance criteria met
- [ ] Tests passing
- [ ] Documentation updated (if applicable)
- [ ] Ready for review
---
_Generated by Conductor. Tasks will be marked [~] in progress and [x] complete._
```
### Phase Guidelines
- Group related tasks into logical phases
- Each phase should be independently verifiable
- Include verification task after each phase
- TDD tracks: Include test writing tasks before implementation tasks
- Typical structure:
1. **Setup/Foundation** - Initial scaffolding, interfaces
2. **Core Implementation** - Main functionality
3. **Integration** - Connect with existing system
4. **Polish** - Error handling, edge cases, docs
## User Review of Plan
Display the generated plan and ask:
```
Here is the implementation plan:
{plan content}
Is this plan correct?
1. Yes, create the track
2. No, let me edit (opens for inline edits)
3. Add more phases/tasks
4. Start over
```
## Track Creation
After plan approval:
1. Create directory structure:
```
conductor/tracks/{trackId}/
├── spec.md
├── plan.md
├── metadata.json
└── index.md
```
2. Create `metadata.json`:
```json
{
"id": "{trackId}",
"title": "{Track Title}",
"type": "feature|bug|chore|refactor",
"status": "pending",
"created": "ISO_TIMESTAMP",
"updated": "ISO_TIMESTAMP",
"phases": {
"total": N,
"completed": 0
},
"tasks": {
"total": M,
"completed": 0
}
}
```
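The `N` and `M` placeholders in `metadata.json` can be derived from the approved plan. A minimal sketch, assuming the plan follows the template above (verification checkboxes are deliberately not counted as tasks):

```python
import re

def plan_counts(plan_text: str) -> tuple[int, int]:
    """Count phases and numbered tasks in a plan.md built from the template."""
    phases = len(re.findall(r"^## Phase \d+", plan_text, flags=re.M))
    # Only "Task X.Y" checkboxes count; verification items are excluded
    tasks = len(re.findall(r"^- \[ \] Task \d+\.\d+", plan_text, flags=re.M))
    return phases, tasks
```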
3. Create `index.md`:
```markdown
# Track: {Track Title}
**ID:** {trackId}
**Status:** Pending
## Documents
- [Specification](./spec.md)
- [Implementation Plan](./plan.md)
## Progress
- Phases: 0/{N} complete
- Tasks: 0/{M} complete
## Quick Links
- [Back to Tracks](../../tracks.md)
- [Product Context](../../product.md)
```
4. Register in `conductor/tracks.md`:
- Add row to tracks table
- Format: `| [ ] | {trackId} | {title} | {created} | {created} |`
5. Update `conductor/index.md`:
- Add track to "Active Tracks" section
## Completion Message
```
Track created successfully!
Track ID: {trackId}
Location: conductor/tracks/{trackId}/
Files created:
- spec.md - Requirements specification
- plan.md - Phased implementation plan
- metadata.json - Track metadata
- index.md - Track navigation
Next steps:
1. Review spec.md and plan.md, make any edits
2. Run /conductor:implement {trackId} to start implementation
3. Run /conductor:status to see project progress
```
## Error Handling
- If directory creation fails: Halt and report, do not register in tracks.md
- If any file write fails: Clean up partial track, report error
- If tracks.md update fails: Warn user to manually register the track
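The create-or-clean-up rules above can be sketched as an all-or-nothing helper; the function name and signature are illustrative:

```python
import shutil
from pathlib import Path

def create_track(root: Path, track_id: str, files: dict[str, str]) -> Path:
    """Create conductor/tracks/{track_id}/ with all files, or nothing at all.

    On any write failure the partial directory is removed, so a
    half-created track is never registered in tracks.md."""
    track_dir = root / "conductor" / "tracks" / track_id
    if track_dir.exists():
        raise FileExistsError(f"Track already exists: {track_dir}")
    try:
        track_dir.mkdir(parents=True)
        for name, content in files.items():
            (track_dir / name).write_text(content)
    except OSError:
        shutil.rmtree(track_dir, ignore_errors=True)  # clean up partial track
        raise
    return track_dir
```

Only after this returns successfully is the row appended to `conductor/tracks.md`.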