ml-experiment-tracker
Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use before model training to standardize tracking-ready experiment definitions.
3,556 stars
by openclaw
Installation
Claude Code / Cursor / Codex
curl -o ~/.claude/skills/ml-experiment-tracker/SKILL.md --create-dirs "https://raw.githubusercontent.com/openclaw/skills/main/skills/0x-professor/ml-experiment-tracker/SKILL.md"
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/ml-experiment-tracker/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How ml-experiment-tracker Compares
| Feature / Agent | ml-experiment-tracker | Standard Approach |
|---|---|---|
| Platform Support | Multi-agent (Claude Code, Cursor, Codex) | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single-file download | N/A |
Frequently Asked Questions
What does this skill do?
Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use before model training to standardize tracking-ready experiment definitions.
Which AI agents support this skill?
This skill is compatible with multiple agents, including Claude Code, Cursor, and Codex.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# ML Experiment Tracker

## Overview

Generate structured experiment plans that can be logged consistently in experiment tracking systems.

## Workflow

1. Define dataset, target task, model family, and parameter search space.
2. Define metrics and acceptance thresholds before training.
3. Produce a run plan with version and artifact expectations.
4. Export the run plan for execution in tracking tools.

## Use Bundled Resources

- Run `scripts/build_experiment_plan.py` to generate consistent run plans.
- Read `references/tracking-guide.md` for the reproducibility checklist.

## Guardrails

- Keep inputs explicit and machine-readable.
- Always include metrics and baseline criteria.
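The bundled `scripts/build_experiment_plan.py` is not reproduced here, but a run plan following the workflow above might look like the sketch below. Every name in it (`RunPlan`, `MetricSpec`, the dataset, metric values, and output filename) is an illustrative assumption, not the script's actual schema; it only demonstrates the pattern the guardrails require: explicit, machine-readable inputs with metrics and a baseline fixed before training.

```python
"""Hypothetical sketch of a tracking-ready run plan; field names are assumptions."""
from dataclasses import dataclass, field, asdict
import json


@dataclass
class MetricSpec:
    name: str
    goal: str        # "maximize" or "minimize"
    threshold: float  # acceptance threshold, fixed before training


@dataclass
class RunPlan:
    dataset: str
    task: str
    model_family: str
    search_space: dict                 # parameter name -> candidate values
    metrics: list[MetricSpec]
    baseline: str                      # criterion the run must beat
    code_version: str                  # e.g. a git commit hash
    expected_artifacts: list[str] = field(default_factory=list)

    def export(self, path: str) -> None:
        """Write the plan as JSON so a tracking tool can ingest it."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)


# Define everything up front, as the workflow requires (example values).
plan = RunPlan(
    dataset="imdb-reviews-v2",
    task="binary sentiment classification",
    model_family="distilbert",
    search_space={"learning_rate": [2e-5, 5e-5], "batch_size": [16, 32]},
    metrics=[MetricSpec("f1", "maximize", 0.90)],
    baseline="logistic regression over tf-idf, f1 = 0.86",
    code_version="git:abc1234",
    expected_artifacts=["model.safetensors", "confusion_matrix.png"],
)
plan.export("run_plan.json")
```

Keeping the plan as plain data rather than code is what makes it portable: the exported JSON can be attached to a run in whatever tracking system executes it.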