deepgram-hello-world

Create a minimal working Deepgram transcription example. Use when starting a new Deepgram integration, testing your setup, or learning basic Deepgram API patterns. Trigger: "deepgram hello world", "deepgram example", "deepgram quick start", "simple transcription", "transcribe audio".

25 stars

Best use case

deepgram-hello-world is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

It produces a minimal, working Deepgram transcription example: URL and local-file transcription in TypeScript and Python, model selection, and basic response parsing.

Teams using deepgram-hello-world should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/deepgram-hello-world/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/deepgram-hello-world/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/deepgram-hello-world/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How deepgram-hello-world Compares

| Feature | deepgram-hello-world | Standard Approach |
|---------|----------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It creates a minimal, working Deepgram speech-to-text example: transcribing audio from a URL or a local file with the TypeScript or Python SDK, plus model selection and optional speaker diarization.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Deepgram Hello World

## Overview
Minimal working examples for Deepgram speech-to-text. Transcribe an audio URL in 5 lines with `createClient` + `listen.prerecorded.transcribeUrl`. Includes local file transcription, Python equivalent, and Nova-3 model selection.

## Prerequisites
- `npm install @deepgram/sdk` completed
- `DEEPGRAM_API_KEY` environment variable set
- Audio source: URL or local file (WAV, MP3, FLAC, OGG, M4A)
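Before running the examples, a quick sanity check can save a confusing `401` or `400` later. This is a sketch in Python; the `check_prerequisites` helper is hypothetical, not part of the Deepgram SDK, and simply verifies the two prerequisites above:

```python
import os

# Formats listed in the prerequisites above.
SUPPORTED_EXTENSIONS = {".wav", ".mp3", ".flac", ".ogg", ".m4a"}

def check_prerequisites(audio_path: str) -> list[str]:
    """Return a list of problems; an empty list means you are ready to run the examples."""
    problems = []
    if not os.environ.get("DEEPGRAM_API_KEY"):
        problems.append("DEEPGRAM_API_KEY is not set")
    ext = os.path.splitext(audio_path)[1].lower()
    if ext not in SUPPORTED_EXTENSIONS:
        problems.append(f"unsupported audio extension: {ext or '(none)'}")
    return problems
```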

## Instructions

### Step 1: Transcribe Audio from URL (TypeScript)

```typescript
import { createClient } from '@deepgram/sdk';

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);

async function main() {
  const { result, error } = await deepgram.listen.prerecorded.transcribeUrl(
    { url: 'https://static.deepgram.com/examples/Bueller-Life-moves-702702706.wav' },
    {
      model: 'nova-3',        // Latest model — best accuracy
      smart_format: true,     // Auto-punctuation, paragraphs, numerals
      language: 'en',
    }
  );

  if (error) throw error;

  const transcript = result.results.channels[0].alternatives[0].transcript;
  console.log('Transcript:', transcript);
  console.log('Confidence:', result.results.channels[0].alternatives[0].confidence);
}

main();
```

### Step 2: Transcribe a Local File

```typescript
import { createClient } from '@deepgram/sdk';
import { readFileSync } from 'fs';

const deepgram = createClient(process.env.DEEPGRAM_API_KEY!);

async function transcribeFile(filePath: string) {
  const audio = readFileSync(filePath);

  const { result, error } = await deepgram.listen.prerecorded.transcribeFile(
    audio,
    {
      model: 'nova-3',
      smart_format: true,
      // Deepgram auto-detects format, but you can specify:
      mimetype: 'audio/wav',
    }
  );

  if (error) throw error;

  console.log(result.results.channels[0].alternatives[0].transcript);
}

transcribeFile('./meeting-recording.wav');
```

### Step 3: Python Equivalent

```python
import os
from deepgram import DeepgramClient, PrerecordedOptions

client = DeepgramClient(os.environ["DEEPGRAM_API_KEY"])

# URL transcription
url = {"url": "https://static.deepgram.com/examples/Bueller-Life-moves-702702706.wav"}
options = PrerecordedOptions(model="nova-3", smart_format=True, language="en")

response = client.listen.rest.v("1").transcribe_url(url, options)
transcript = response.results.channels[0].alternatives[0].transcript
print(f"Transcript: {transcript}")
print(f"Confidence: {response.results.channels[0].alternatives[0].confidence}")
```

```python
# Local file transcription
with open("meeting.wav", "rb") as audio:
    source = {"buffer": audio.read(), "mimetype": "audio/wav"}
    response = client.listen.rest.v("1").transcribe_file(source, options)
    print(response.results.channels[0].alternatives[0].transcript)
```

### Step 4: Add Features

```typescript
// Enable diarization (speaker identification)
const { result } = await deepgram.listen.prerecorded.transcribeUrl(
  { url: audioUrl },
  {
    model: 'nova-3',
    smart_format: true,
    diarize: true,         // Speaker labels
    utterances: true,      // Turn-by-turn segments
    paragraphs: true,      // Paragraph formatting
  }
);

// Print speaker-labeled output
if (result.results.utterances) {
  for (const utterance of result.results.utterances) {
    console.log(`Speaker ${utterance.speaker}: ${utterance.transcript}`);
  }
}
```
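The same speaker-labeled formatting can be done in Python. This sketch assumes the documented response shape (a `results.utterances` list whose items carry `speaker` and `transcript` fields) and works on a plain dict, so it can be unit-tested without calling the API; `format_utterances` is a hypothetical helper name:

```python
def format_utterances(results: dict) -> list[str]:
    """Turn diarized utterances into 'Speaker N: text' lines.

    Expects the documented `results` shape: an `utterances` list whose
    items carry `speaker` and `transcript` fields. Returns an empty list
    when utterances were not requested.
    """
    return [
        f"Speaker {utt['speaker']}: {utt['transcript']}"
        for utt in results.get("utterances") or []
    ]
```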

### Step 5: Explore Model Options

| Model | Use Case | Speed | Accuracy |
|-------|----------|-------|----------|
| `nova-3` | General — best accuracy | Fast | Highest |
| `nova-2` | General — proven stable | Fast | Very High |
| `nova-2-meeting` | Conference rooms, multiple speakers | Fast | High |
| `nova-2-phonecall` | Low-bandwidth phone audio | Fast | High |
| `base` | Cost-sensitive, high-volume | Fastest | Good |
| `whisper-large` | Multilingual (100+ languages) | Slow | High |
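The table can be encoded as a small lookup so callers pick a model by scenario rather than by memorizing model names. The scenario keys below are illustrative shorthand, not official Deepgram terminology:

```python
# Scenario -> model, following the table above; the scenario labels are
# illustrative shorthand, not official Deepgram terminology.
MODEL_FOR_SCENARIO = {
    "general": "nova-3",
    "general-stable": "nova-2",
    "meeting": "nova-2-meeting",
    "phonecall": "nova-2-phonecall",
    "high-volume": "base",
    "multilingual": "whisper-large",
}

def pick_model(scenario: str) -> str:
    # Unknown scenarios fall back to the best general-purpose model.
    return MODEL_FOR_SCENARIO.get(scenario, "nova-3")
```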

### Step 6: Run It

```bash
# TypeScript
npx tsx hello-deepgram.ts

# Python
python hello_deepgram.py
```

## Output
- Working transcription from URL or local file
- Printed transcript text with confidence score
- Optional: speaker-labeled utterances

## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| `401 Unauthorized` | Invalid API key | Check `DEEPGRAM_API_KEY` |
| `400 Bad Request` | Unsupported audio format | Use WAV, MP3, FLAC, OGG, or M4A |
| Empty transcript | No speech in audio | Verify audio has audible speech |
| `ENOTFOUND` | URL not reachable | Check audio URL is publicly accessible |
| `Cannot find module '@deepgram/sdk'` | SDK not installed | Run `npm install @deepgram/sdk` |

## Resources
- [Pre-recorded Audio Guide](https://developers.deepgram.com/docs/pre-recorded-audio)
- [Model Options](https://developers.deepgram.com/docs/model)
- [Smart Formatting](https://developers.deepgram.com/docs/smart-format)
- [Sample Audio Files](https://static.deepgram.com/examples/)

## Next Steps
Proceed to `deepgram-core-workflow-a` for production transcription patterns or `deepgram-core-workflow-b` for live streaming.

Related Skills

exa-hello-world

from ComeOnOliver/skillshub

Create a minimal working Exa search example with real results. Use when starting a new Exa integration, testing your setup, or learning basic search, searchAndContents, and findSimilar patterns. Trigger with phrases like "exa hello world", "exa example", "exa quick start", "simple exa search", "first exa query".

evernote-hello-world

from ComeOnOliver/skillshub

Create a minimal working Evernote example. Use when starting a new Evernote integration, testing your setup, or learning basic Evernote API patterns. Trigger with phrases like "evernote hello world", "evernote example", "evernote quick start", "simple evernote code", "create first note".

elevenlabs-hello-world

from ComeOnOliver/skillshub

Generate your first ElevenLabs text-to-speech audio file. Use when starting a new ElevenLabs integration, testing your setup, or learning basic TTS API patterns. Trigger: "elevenlabs hello world", "elevenlabs example", "elevenlabs quick start", "first elevenlabs TTS", "text to speech demo".

documenso-hello-world

from ComeOnOliver/skillshub

Create a minimal working Documenso example. Use when starting a new Documenso integration, testing your setup, or learning basic document signing patterns. Trigger with phrases like "documenso hello world", "documenso example", "documenso quick start", "simple documenso code", "first document".

deepgram-webhooks-events

from ComeOnOliver/skillshub

Implement Deepgram callback and webhook handling for async transcription. Use when implementing callback URLs, processing async transcription results, or handling Deepgram event notifications. Trigger: "deepgram callback", "deepgram webhook", "async transcription", "deepgram events", "deepgram notifications", "deepgram async".

deepgram-upgrade-migration

from ComeOnOliver/skillshub

Plan and execute Deepgram SDK upgrades and model migrations. Use when upgrading SDK versions (v3->v4->v5), migrating models (Nova-2 to Nova-3), or planning API version transitions. Trigger: "upgrade deepgram", "deepgram migration", "update deepgram SDK", "deepgram version upgrade", "nova-3 migration".

deepgram-security-basics

from ComeOnOliver/skillshub

Apply Deepgram security best practices for API key management and data protection. Use when securing Deepgram integrations, implementing key rotation, or auditing security configurations. Trigger: "deepgram security", "deepgram API key security", "secure deepgram", "deepgram key rotation", "deepgram data protection", "deepgram PII redaction".

deepgram-sdk-patterns

from ComeOnOliver/skillshub

Apply production-ready Deepgram SDK patterns for TypeScript and Python. Use when implementing Deepgram integrations, refactoring SDK usage, or establishing team coding standards for Deepgram. Trigger: "deepgram SDK patterns", "deepgram best practices", "deepgram code patterns", "idiomatic deepgram", "deepgram typescript".

deepgram-reference-architecture

from ComeOnOliver/skillshub

Implement Deepgram reference architecture for scalable transcription systems. Use when designing transcription pipelines, building production architectures, or planning Deepgram integration at scale. Trigger: "deepgram architecture", "transcription pipeline", "deepgram system design", "deepgram at scale", "enterprise deepgram", "deepgram queue".

deepgram-rate-limits

from ComeOnOliver/skillshub

Implement Deepgram rate limiting and backoff strategies. Use when handling API quotas, implementing request throttling, or dealing with 429 rate limit errors. Trigger: "deepgram rate limit", "deepgram throttling", "429 error deepgram", "deepgram quota", "deepgram backoff", "deepgram concurrency".

deepgram-prod-checklist

from ComeOnOliver/skillshub

Execute Deepgram production deployment checklist. Use when preparing for production launch, auditing production readiness, or verifying deployment configurations. Trigger: "deepgram production", "deploy deepgram", "deepgram prod checklist", "deepgram go-live", "production ready deepgram".

deepgram-performance-tuning

from ComeOnOliver/skillshub

Optimize Deepgram API performance for faster transcription and lower latency. Use when improving transcription speed, reducing latency, or optimizing audio processing pipelines. Trigger: "deepgram performance", "speed up deepgram", "optimize transcription", "deepgram latency", "deepgram faster", "deepgram throughput".