dl-transformer-finetune
Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch fine-tuning workflows.
3,556 stars
by openclaw
Installation
Claude Code / Cursor / Codex
curl -o ~/.claude/skills/dl-transformer-finetune/SKILL.md --create-dirs "https://raw.githubusercontent.com/openclaw/skills/main/skills/0x-professor/dl-transformer-finetune/SKILL.md"
Manual Installation
- Download SKILL.md from GitHub
- Place it at .claude/skills/dl-transformer-finetune/SKILL.md inside your project
- Restart your AI agent; it will auto-discover the skill
How dl-transformer-finetune Compares
| Feature | dl-transformer-finetune | Standard Approach |
|---|---|---|
| Platform Support | Multiple agents (Claude Code, Cursor, Codex) | Limited / varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single SKILL.md file, via curl or manual copy | Varies |
Frequently Asked Questions
What does this skill do?
Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch fine-tuning workflows.
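In practice, a generated plan targets a standard fine-tuning loop. The sketch below shows roughly what that looks like with the Hugging Face Trainer; the model name, dataset, and hyperparameter values are illustrative assumptions, not output of this skill.

```python
# Minimal sketch of the kind of run a generated plan targets; model,
# dataset, and hyperparameters here are placeholders for illustration.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    set_seed,
)

set_seed(42)  # explicit seed, per the skill's reproducibility guardrail

model_name = "distilbert-base-uncased"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny in-memory dataset so the sketch is self-contained; a real run would
# load the dataset named in the plan.
raw = Dataset.from_dict(
    {"text": ["great product", "terrible service"], "label": [1, 0]}
)
encoded = raw.map(
    lambda batch: tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=32
    ),
    batched=True,
)

args = TrainingArguments(
    output_dir="runs/distilbert-demo",  # explicit output directory (guardrail)
    seed=42,
    learning_rate=2e-5,
    num_train_epochs=1,
    per_device_train_batch_size=2,
    logging_steps=1,
    report_to="none",
)

Trainer(model=model, args=args, train_dataset=encoded).train()
```

Pinning the seed and output directory in code mirrors the guardrails listed in the SKILL.md source below.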
Which AI agents support this skill?
This skill is compatible with multiple agents, including Claude Code, Cursor, and Codex.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# DL Transformer Finetune

## Overview

Generate reproducible fine-tuning run plans for transformer models and downstream tasks.

## Workflow

1. Define base model, task type, and dataset.
2. Set training hyperparameters and evaluation cadence.
3. Produce run plan plus model card skeleton.
4. Export configuration-ready artifacts for training pipelines.

## Use Bundled Resources

- Run `scripts/build_finetune_plan.py` for deterministic plan output.
- Read `references/finetune-guide.md` for hyperparameter baseline guidance.

## Guardrails

- Keep run plans reproducible with explicit seeds and output directories.
- Include evaluation and rollback criteria.
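For orientation, a plan artifact of the kind workflow steps 3 and 4 describe might look like the sketch below. The schema, field names, and default values are assumptions for illustration, not the actual format produced by `scripts/build_finetune_plan.py`.

```python
# Hypothetical sketch of a run-plan artifact; the fields are assumptions,
# not the bundled script's real schema.
import json
from dataclasses import asdict, dataclass


@dataclass
class FinetunePlan:
    base_model: str
    task_type: str
    dataset: str
    seed: int = 42                      # explicit seed (guardrail)
    output_dir: str = "runs/finetune"   # explicit output directory (guardrail)
    learning_rate: float = 2e-5
    num_train_epochs: int = 3
    per_device_train_batch_size: int = 16
    eval_every_steps: int = 500         # evaluation cadence
    rollback_criteria: str = "revert if eval loss regresses for 3 checks"

    def model_card_skeleton(self) -> str:
        # Model card skeleton built from the same fields as the plan.
        return (
            f"# Model Card: {self.base_model} / {self.task_type}\n\n"
            f"- Dataset: {self.dataset}\n"
            f"- Seed: {self.seed}\n"
            f"- Hyperparameters: lr={self.learning_rate}, "
            f"epochs={self.num_train_epochs}, "
            f"batch={self.per_device_train_batch_size}\n"
        )


plan = FinetunePlan(
    base_model="distilbert-base-uncased",
    task_type="text-classification",
    dataset="glue/sst2",
)
print(json.dumps(asdict(plan), indent=2))  # configuration-ready artifact
print(plan.model_card_skeleton())          # model card skeleton
```

Serializing the plan to JSON keeps it diffable and directly consumable by a training pipeline, while the model-card skeleton captures the same fields for documentation.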