openai-whisper
Local speech-to-text with the Whisper CLI (no API key).
272 stars
Installation
Claude Code / Cursor / Codex
$ curl -o ~/.claude/skills/openai-whisper/SKILL.md --create-dirs "https://raw.githubusercontent.com/TermiX-official/cryptoclaw/main/skills/openai-whisper/SKILL.md"
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/openai-whisper/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
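The manual steps above can be sketched as a small Python helper that mirrors the one-line curl install; the `install_skill` and `skill_path` names are illustrative, not part of the skill itself:

```python
import os
import urllib.request

# Raw SKILL.md URL from the curl one-liner above.
SKILL_URL = ("https://raw.githubusercontent.com/TermiX-official/cryptoclaw/"
             "main/skills/openai-whisper/SKILL.md")

def skill_path(project_dir="."):
    # Agents auto-discover skills at .claude/skills/<name>/SKILL.md.
    return os.path.join(project_dir, ".claude", "skills",
                        "openai-whisper", "SKILL.md")

def install_skill(project_dir="."):
    dest = skill_path(project_dir)
    os.makedirs(os.path.dirname(dest), exist_ok=True)  # like curl --create-dirs
    urllib.request.urlretrieve(SKILL_URL, dest)        # download SKILL.md
    return dest
```

After running `install_skill()`, restart the agent so it picks up the new skill.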
How openai-whisper Compares
| Feature / Agent | openai-whisper | Standard Approach |
|---|---|---|
| Platform Support | Multiple (Claude Code, Cursor, Codex) | Limited / varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single SKILL.md file | N/A |
Frequently Asked Questions
What does this skill do?
Local speech-to-text with the Whisper CLI (no API key).
Which AI agents support this skill?
This skill is compatible with multiple AI agents, including Claude Code, Cursor, and Codex.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Whisper (CLI)

Use `whisper` to transcribe audio locally.

Quick start

- `whisper /path/audio.mp3 --model medium --output_format txt --output_dir .`
- `whisper /path/audio.m4a --task translate --output_format srt`

Notes

- Models download to `~/.cache/whisper` on first run.
- `--model` defaults to `turbo` on this install.
- Use smaller models for speed, larger for accuracy.
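The quick-start commands above can be wrapped in a small Python helper that assembles the same argv list; `build_whisper_cmd` is a sketch, with defaults mirroring the notes (`turbo` model, plain-text output in the current directory):

```python
import subprocess

def build_whisper_cmd(audio_path, model="turbo", task="transcribe",
                      output_format="txt", output_dir="."):
    """Assemble the argv list for a local `whisper` CLI run.

    Defaults mirror the SKILL.md notes: the `turbo` model and
    plain-text output written to the current directory.
    """
    return [
        "whisper", audio_path,
        "--model", model,
        "--task", task,
        "--output_format", output_format,
        "--output_dir", output_dir,
    ]

def transcribe(audio_path, **kwargs):
    # Requires the openai-whisper package (and its models) installed locally.
    subprocess.run(build_whisper_cmd(audio_path, **kwargs), check=True)
```

For example, `transcribe("/path/audio.mp3", model="medium")` runs the first quick-start command.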