ml-experiment-tracker

Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use before model training to standardize tracking-ready experiment definitions.

3,556 stars

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/ml-experiment-tracker/SKILL.md --create-dirs "https://raw.githubusercontent.com/openclaw/skills/main/skills/0x-professor/ml-experiment-tracker/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/ml-experiment-tracker/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How ml-experiment-tracker Compares

| Feature | ml-experiment-tracker | Standard Approach |
| --- | --- | --- |
| Platform Support | Multi | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Plan reproducible ML experiment runs with explicit parameters, metrics, and artifacts. Use before model training to standardize tracking-ready experiment definitions.

Which AI agents support this skill?

This skill is compatible with multiple agents, including Claude Code, Cursor, and Codex.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# ML Experiment Tracker

## Overview

Generate structured experiment plans that can be logged consistently in experiment tracking systems.

## Workflow

1. Define dataset, target task, model family, and parameter search space.
2. Define metrics and acceptance thresholds before training.
3. Produce run plan with version and artifact expectations.
4. Export the run plan for execution in tracking tools.
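The four steps above can be sketched as a plain data structure. This is a minimal illustration, not the skill's actual output format: the field names, dataset, and threshold values are hypothetical, chosen only to show explicit parameters, pre-declared metrics, and artifact expectations in one machine-readable object.

```python
from dataclasses import dataclass, asdict


@dataclass
class RunPlan:
    # Step 1: dataset, task, model family, and search space as explicit inputs
    dataset: str
    task: str
    model_family: str
    search_space: dict
    # Step 2: metrics with acceptance thresholds, fixed before training
    metrics: dict
    # Step 3: version and artifact expectations for the run
    code_version: str
    expected_artifacts: list


plan = RunPlan(
    dataset="imdb-reviews-v2",          # hypothetical dataset name
    task="binary-classification",
    model_family="logistic-regression",
    search_space={"C": [0.1, 1.0, 10.0]},
    metrics={"f1": {"acceptance_threshold": 0.85}},
    code_version="git:abc1234",
    expected_artifacts=["model.pkl", "metrics.json"],
)

# Step 4: export as a plain dict for logging in a tracking tool
exported = asdict(plan)
```

Because every field is explicit and serializable, the same plan can be logged verbatim to any tracking backend that accepts key-value metadata.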

## Use Bundled Resources

- Run `scripts/build_experiment_plan.py` to generate consistent run plans.
- Read `references/tracking-guide.md` for a reproducibility checklist.

## Guardrails

- Keep inputs explicit and machine-readable.
- Always include metrics and baseline criteria.
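These guardrails can be enforced with a simple validation pass over a plan dict before it is exported. The sketch below is an assumption about how such a check might look; the required key names (`dataset`, `metrics`, `baseline`) are illustrative, not the skill's actual schema.

```python
# Hypothetical guardrail keys: explicit inputs, metrics, and a baseline criterion
REQUIRED_KEYS = {"dataset", "metrics", "baseline"}


def validate_plan(plan: dict) -> list:
    """Return a list of guardrail violations for an experiment plan dict."""
    errors = []
    missing = REQUIRED_KEYS - plan.keys()
    if missing:
        errors.append(f"missing keys: {sorted(missing)}")
    # Metrics must name at least one metric; an empty dict defeats the guardrail
    if not plan.get("metrics"):
        errors.append("metrics must not be empty")
    return errors


plan = {
    "dataset": "imdb-reviews-v2",
    "metrics": {"f1": 0.85},
    "baseline": "majority-class",
}
violations = validate_plan(plan)  # an empty list means the plan passes
```

Rejecting a plan before training starts is cheap; discovering after a run that no acceptance threshold was recorded is not.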