# OpenAI Realtime API — Voice-Native AI Conversations

## Overview

OpenAI Realtime API — Voice-Native AI Conversations is best used when you need a repeatable AI agent workflow instead of a one-off prompt. Teams adopting it should expect more consistent output, faster repeated execution, and less prompt rewriting.

## When to use this skill

- You want a reusable workflow that can be run more than once with consistent structure.

## When not to use this skill

- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.

## Installation

### Claude Code / Cursor / Codex

```bash
curl -o ~/.claude/skills/openai-realtime/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/TerminalSkills/skills/openai-realtime/SKILL.md"
```

### Manual Installation

1. Download SKILL.md from GitHub
2. Place it in .claude/skills/openai-realtime/SKILL.md inside your project
3. Restart your AI agent — it will auto-discover the skill

## How OpenAI Realtime API — Voice-Native AI Conversations Compares

| Feature / Agent | OpenAI Realtime API — Voice-Native AI Conversations | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

## Frequently Asked Questions

**What does this skill do?**

It sets the agent up as an expert in the OpenAI Realtime API: opening the WebSocket connection, configuring sessions and voices, streaming PCM16 audio in both directions, handling server-side turn detection and interruptions, and wiring up function calling, as shown in the SKILL.md source below.

**Where can I find the source code?**

You can find the source code on GitHub using the link provided at the top of the page.

## SKILL.md Source

# OpenAI Realtime API — Voice-Native AI Conversations

## Overview

You are an expert in the OpenAI Realtime API, the WebSocket-based interface for building voice-native AI applications. You help developers build conversational voice agents that process audio input directly (no separate STT step), generate spoken responses with natural intonation, handle interruptions, and use function calling — all in a single streaming connection with sub-second latency.

## Instructions

### WebSocket Connection

```typescript
// Connect to OpenAI Realtime API
import WebSocket from "ws";

const ws = new WebSocket("wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview", {
  headers: {
    "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    "OpenAI-Beta": "realtime=v1",
  },
});

ws.on("open", () => {
  // Configure the session
  ws.send(JSON.stringify({
    type: "session.update",
    session: {
      modalities: ["text", "audio"],
      voice: "alloy",                      // alloy, echo, fable, onyx, nova, shimmer
      instructions: `You are a helpful dental clinic receptionist named Ava.
        Be warm, professional, and concise. Use short sentences appropriate for phone calls.
        If asked about medical advice, say you'll transfer to the dentist.`,
      input_audio_format: "pcm16",         // 16-bit PCM, 24kHz
      output_audio_format: "pcm16",
      input_audio_transcription: {
        model: "whisper-1",                 // Also transcribe for logging
      },
      turn_detection: {
        type: "server_vad",                 // Server-side voice activity detection
        threshold: 0.5,                     // Sensitivity (0-1)
        prefix_padding_ms: 300,             // Include 300ms before speech start
        silence_duration_ms: 500,           // 500ms silence = end of turn
      },
      tools: [                              // Function calling tools
        {
          type: "function",
          name: "check_availability",
          description: "Check available appointment slots",
          parameters: {
            type: "object",
            properties: {
              date: { type: "string", description: "Date in YYYY-MM-DD format" },
              procedure: { type: "string", enum: ["cleaning", "filling", "crown", "consultation"] },
            },
            required: ["date", "procedure"],
          },
        },
        {
          type: "function",
          name: "book_appointment",
          description: "Book an appointment for a patient",
          parameters: {
            type: "object",
            properties: {
              patient_name: { type: "string" },
              phone: { type: "string" },
              date: { type: "string" },
              time: { type: "string" },
              procedure: { type: "string" },
            },
            required: ["patient_name", "date", "time", "procedure"],
          },
        },
      ],
    },
  }));
});

// Handle events from OpenAI
ws.on("message", (data) => {
  const event = JSON.parse(data.toString());

  switch (event.type) {
    case "response.audio.delta":
      // Stream audio chunks to speaker/WebRTC
      const audioChunk = Buffer.from(event.delta, "base64");
      sendToSpeaker(audioChunk);
      break;

    case "response.audio_transcript.delta":
      // Real-time transcript of AI's response
      process.stdout.write(event.delta);
      break;

    case "conversation.item.input_audio_transcription.completed":
      // User's speech transcribed
      console.log(`\nUser said: ${event.transcript}`);
      break;

    case "response.function_call_arguments.done":
      // AI wants to call a function
      handleFunctionCall(event.name, JSON.parse(event.arguments));
      break;

    case "input_audio_buffer.speech_started":
      // User started speaking — interrupt AI if it's talking
      console.log("[User interruption detected]");
      break;
  }
});

// Send microphone audio
function sendAudio(pcmBuffer: Buffer) {
  ws.send(JSON.stringify({
    type: "input_audio_buffer.append",
    audio: pcmBuffer.toString("base64"),
  }));
}

// Handle function calls
async function handleFunctionCall(name: string, args: any, callId: string) {
  let result: string;

  if (name === "check_availability") {
    const slots = await checkClinicSlots(args.date, args.procedure);
    result = JSON.stringify(slots);
  } else if (name === "book_appointment") {
    const booking = await createAppointment(args);
    result = JSON.stringify(booking);
  } else {
    result = JSON.stringify({ error: `Unknown function: ${name}` });
  }

  // Send function result back — AI will speak the response
  ws.send(JSON.stringify({
    type: "conversation.item.create",
    item: {
      type: "function_call_output",
      call_id: callId,
      output: result,
    },
  }));

  // Trigger AI to respond with the function result
  ws.send(JSON.stringify({ type: "response.create" }));
}
```
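
The `sendAudio` helper above expects raw 16-bit PCM at 24kHz. If your capture path yields Float32 samples (as Web Audio does), a minimal conversion sketch follows; `floatTo16BitPCM` is a hypothetical helper name, and the capture pipeline is assumed to already deliver 24kHz mono frames:

```typescript
// Sketch: convert Float32 samples in [-1, 1] (assumed 24kHz mono) into the
// little-endian 16-bit PCM buffer that sendAudio() expects.
function floatTo16BitPCM(samples: Float32Array): Buffer {
  const pcm = Buffer.alloc(samples.length * 2);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])); // clamp to avoid overflow
    pcm.writeInt16LE(Math.round(s < 0 ? s * 0x8000 : s * 0x7fff), i * 2);
  }
  return pcm;
}

// Usage: sendAudio(floatTo16BitPCM(frame));
```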

### Python SDK

```python
# Using OpenAI Python SDK
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def run_voice_agent():
    async with client.beta.realtime.connect(
        model="gpt-4o-realtime-preview"
    ) as connection:
        await connection.session.update(session={
            "modalities": ["text", "audio"],
            "voice": "nova",
            "instructions": "You are a helpful assistant.",
            "turn_detection": {"type": "server_vad"},
        })

        # Send audio from microphone
        await connection.input_audio_buffer.append(audio=base64_audio)

        # Process events
        async for event in connection:
            if event.type == "response.audio.delta":
                # event.delta is base64-encoded PCM16; decode before playback
                play_audio(event.delta)
            elif event.type == "response.done":
                print("AI finished speaking")
```

## Key Concepts

- **Audio-native** — The model processes audio directly, understanding tone, emotion, and emphasis (not just text transcription)
- **Server VAD** — OpenAI's server detects when the user starts/stops speaking; no client-side VAD needed
- **Interruptions** — When the user speaks while AI is talking, the response is automatically interrupted (see the barge-in sketch after this list)
- **Function calling** — Same as Chat Completions function calling, but in real-time during voice conversation
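
Interruption handling usually needs one extra client-side step beyond what the event handler above shows: stop local playback and tell the server how much audio the user actually heard. A minimal sketch, where `stopSpeaker()` and the `playedMs` counter are hypothetical pieces of your playback layer:

```typescript
// Sketch: barge-in handling. stopSpeaker() and playedMs are placeholders for
// your audio playback layer, not part of the API.
let currentItemId: string | null = null;
let playedMs = 0; // ms of assistant audio actually played so far

ws.on("message", (data) => {
  const event = JSON.parse(data.toString());

  if (event.type === "response.audio.delta") {
    currentItemId = event.item_id; // track the item currently being spoken
  }

  if (event.type === "input_audio_buffer.speech_started" && currentItemId) {
    stopSpeaker();                                        // cut playback now
    ws.send(JSON.stringify({ type: "response.cancel" })); // stop generation

    // Trim the item to what was actually heard, so the model's context
    // matches what the user experienced on the next turn
    ws.send(JSON.stringify({
      type: "conversation.item.truncate",
      item_id: currentItemId,
      content_index: 0,
      audio_end_ms: playedMs,
    }));
    currentItemId = null;
  }
});
```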

## Examples

**Example 1: User asks to set up openai-realtime**

User: "Help me set up openai-realtime for my project"

The agent should:
1. Check system requirements and prerequisites
2. Install or configure openai-realtime
3. Set up initial project structure
4. Verify the setup works correctly (see the smoke-test sketch below)
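
For step 4, a minimal smoke test, assuming only that `OPENAI_API_KEY` is set; it runs a text-only round trip, so no audio hardware is needed:

```typescript
// Sketch: verify the Realtime connection with a text-only response.
import WebSocket from "ws";

const ws = new WebSocket(
  "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview",
  {
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "OpenAI-Beta": "realtime=v1",
    },
  },
);

ws.on("open", () => {
  // Ask for a text-only response: no microphone or speaker required
  ws.send(JSON.stringify({
    type: "response.create",
    response: { modalities: ["text"], instructions: "Reply with the word 'ready'." },
  }));
});

ws.on("message", (data) => {
  const event = JSON.parse(data.toString());
  if (event.type === "response.done") {
    console.log("Realtime session verified");
    ws.close();
  }
});
```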

**Example 2: User asks to build a feature with openai-realtime**

User: "Create a dashboard using openai-realtime"

The agent should:
1. Scaffold the component or configuration
2. Connect to the appropriate data source
3. Implement the requested feature
4. Test and validate the output

## Guidelines

1. **Server VAD for simplicity** — Use `server_vad` turn detection; OpenAI handles speech detection, silence, and interruptions
2. **PCM16 format** — Use 16-bit PCM at 24kHz for both input and output; minimal encoding overhead
3. **Short instructions** — Keep system instructions concise; the model processes them with every turn
4. **Function calls for actions** — Use tools for bookings, lookups, and transfers; the model speaks the result naturally
5. **Input transcription** — Enable `input_audio_transcription` for logging and analytics; small additional cost
6. **Silence threshold tuning** — 500ms silence_duration for responsive agents; 1000ms for dictation (avoids mid-sentence cuts); see the sketch after this list
7. **Voice selection** — `nova` for friendly female, `onyx` for authoritative male, `alloy` for neutral; test with your use case
8. **Cost awareness** — Realtime API costs ~$0.06/min input + $0.24/min output audio; use for high-value interactions (sales, support), not bulk processing. At those rates, a 10-minute call in which the agent speaks for roughly 4 minutes costs about 10 × $0.06 + 4 × $0.24 ≈ $1.56
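
Guideline 6 in practice: turn detection can be retuned mid-call with `session.update`. A sketch, where `setDictationMode` is a hypothetical helper:

```typescript
// Sketch: switch between conversational and dictation-style VAD settings.
function setDictationMode(ws: WebSocket, dictation: boolean) {
  ws.send(JSON.stringify({
    type: "session.update",
    session: {
      turn_detection: {
        type: "server_vad",
        threshold: 0.5,
        prefix_padding_ms: 300,
        // 500ms keeps agents responsive; 1000ms avoids mid-sentence cuts
        silence_duration_ms: dictation ? 1000 : 500,
      },
    },
  }));
}
```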

## Related Skills

All from ComeOnOliver/skillshub:

- **defold-native-extension-editing**: Defold native extension development. Use when creating or editing C/C++ (.c, .cpp, .h, .hpp), JavaScript (.js), or manifest files in native extension directories (src/, include/, lib/, api/).
- **java-add-graalvm-native-image-support**: GraalVM Native Image expert that adds native image support to Java applications, builds the project, analyzes build errors, applies fixes, and iterates until successful compilation using Oracle best practices.
- **Voice Call**: Use the voice-call plugin to start or inspect calls (Twilio, Telnyx, Plivo, or mock).
- **OpenAI Whisper API (curl)**: Transcribe an audio file via OpenAI’s `/v1/audio/transcriptions` endpoint.
- **OpenAI Image Gen**: Generate a handful of “random but structured” prompts and render them via the OpenAI Images API.
- **fixing-claude-export-conversations**: Fixes broken line wrapping in Claude Code exported conversation files (.txt), reconstructing tables, paragraphs, paths, and tool calls that were hard-wrapped at fixed column widths. Includes an automated validation suite (generic, file-agnostic checks). Triggers when the user has a Claude Code export file with broken formatting, mentions "fix export", "fix conversation", "exported conversation", "make export readable", references a file matching YYYY-MM-DD-HHMMSS-*.txt, or has a .txt file with broken tables, split paths, or mangled tool output from Claude Code.
- **upgrading-react-native**: Upgrades React Native apps to newer versions by applying rn-diff-purge template diffs, updating package.json dependencies, migrating native iOS and Android configuration, resolving CocoaPods and Gradle changes, and handling breaking API updates. Use when upgrading React Native, bumping RN version, updating from RN 0.x to 0.y, or migrating Expo SDK alongside a React Native upgrade.
- **react-native-brownfield-migration**: Provides an incremental adoption strategy to migrate native iOS or Android apps to React Native or Expo using @callstack/react-native-brownfield for initial setup. Use when planning migration steps, packaging XCFramework/AAR artifacts, and integrating them into host apps.
- **react-native-design**: Master React Native styling, navigation, and Reanimated animations for cross-platform mobile development. Use when building React Native apps, implementing navigation patterns, or creating performant animations.
- **vercel-react-native-skills**: React Native and Expo best practices for building performant mobile apps. Use when building React Native components, optimizing list performance, implementing animations, or working with native modules. Triggers on tasks involving React Native, Expo, mobile performance, or native platform APIs.
- **difficult-workplace-conversations**: Structured approach to workplace conflicts, performance discussions, and challenging feedback using preparation-delivery-followup framework. Use when preparing for tough conversations, addressing conflicts, giving critical feedback, or navigating sensitive workplace discussions.
- **voice-ai-engine-development**: Build real-time conversational AI voice engines using async worker pipelines, streaming transcription, LLM agents, and TTS synthesis with interrupt handling and multi-provider support.