OpenAI SDK

## Overview

25 stars

Best use case

OpenAI SDK is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using the OpenAI SDK should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/openai-sdk/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/TerminalSkills/skills/openai-sdk/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/openai-sdk/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How OpenAI SDK Compares

| Feature / Agent | OpenAI SDK | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It gives your AI agent a reusable OpenAI SDK workflow covering chat completions, streaming, function calling, vision, embeddings, and structured outputs, so the same tasks can be run repeatedly with consistent structure.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# OpenAI SDK

## Overview

The OpenAI SDK provides access to GPT-4o, o1, DALL-E 3, Whisper, and embedding models. This skill covers the Chat Completions API (text generation, conversation, function calling), streaming responses, vision (image understanding), embeddings, the Assistants API (stateful agents), DALL-E image generation, Whisper transcription, and content moderation. The examples below are in TypeScript/Node.js; the Python SDK exposes the same API surface.

## Instructions

### Step 1: Installation and Setup

```bash
# Node.js
npm install openai

# Python
pip install openai
```

```typescript
// lib/openai.ts — Client initialization
import OpenAI from 'openai'

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,    // or set OPENAI_API_KEY env var
})
```

### Step 2: Chat Completions

```typescript
// chat.ts — Basic chat completion and conversation
import OpenAI from 'openai'

const openai = new OpenAI()

// Single message
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful coding assistant.' },
    { role: 'user', content: 'Write a Python function to merge two sorted lists.' },
  ],
  temperature: 0.7,       // 0 = deterministic, 2 = creative
  max_tokens: 1000,
})

console.log(response.choices[0].message.content)

// Multi-turn conversation (maintain message history)
const messages: OpenAI.ChatCompletionMessageParam[] = [
  { role: 'system', content: 'You are a data analysis assistant.' },
]

async function chat(userMessage: string) {
  messages.push({ role: 'user', content: userMessage })

  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
  })

  const assistantMessage = response.choices[0].message
  messages.push(assistantMessage)
  return assistantMessage.content
}
```

### Step 3: Streaming Responses

```typescript
// stream.ts — Stream chat completions for real-time UI
const stream = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Explain quantum computing in simple terms.' }],
  stream: true,
})

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || ''
  process.stdout.write(content)    // stream to terminal/UI
}
```

```typescript
// Next.js streaming API route
// app/api/chat/route.ts — Server-sent events for streaming to frontend
import OpenAI from 'openai'
import { OpenAIStream, StreamingTextResponse } from 'ai'    // Vercel AI SDK helper

const openai = new OpenAI()

export async function POST(req: Request) {
  const { messages } = await req.json()

  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages,
    stream: true,
  })

  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}
```

### Step 4: Function Calling (Tool Use)

```typescript
// function_calling.ts — Let GPT call your functions
const tools: OpenAI.ChatCompletionTool[] = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get current weather for a city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name' },
          units: { type: 'string', enum: ['celsius', 'fahrenheit'] },
        },
        required: ['city'],
      },
    },
  },
  {
    type: 'function',
    function: {
      name: 'search_database',
      description: 'Search the product database',
      parameters: {
        type: 'object',
        properties: {
          query: { type: 'string' },
          category: { type: 'string' },
          max_price: { type: 'number' },
        },
        required: ['query'],
      },
    },
  },
]

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools,
  tool_choice: 'auto',
})

// Handle function call
const toolCall = response.choices[0].message.tool_calls?.[0]
if (toolCall) {
  const args = JSON.parse(toolCall.function.arguments)
  // Execute the actual function (getWeather here stands in for your own implementation)
  const result = await getWeather(args.city, args.units)

  // Send result back to GPT
  const finalResponse = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'What is the weather in Tokyo?' },
      response.choices[0].message,
      { role: 'tool', tool_call_id: toolCall.id, content: JSON.stringify(result) },
    ],
    tools,
  })
}
```

### Step 5: Embeddings

```typescript
// embeddings.ts — Generate embeddings for semantic search
const response = await openai.embeddings.create({
  model: 'text-embedding-3-small',    // 1536 dimensions, cheapest
  input: 'How to deploy a Node.js app to production',
})

const embedding = response.data[0].embedding    // number[] of length 1536

// Batch embeddings
const batchResponse = await openai.embeddings.create({
  model: 'text-embedding-3-small',
  input: [
    'First document text',
    'Second document text',
    'Third document text',
  ],
})
// batchResponse.data[0].embedding, batchResponse.data[1].embedding, etc.
```
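Once you have embeddings, semantic search is a nearest-neighbor ranking, usually by cosine similarity. A minimal, dependency-free helper (function names are illustrative) might look like this; note that `text-embedding-3` vectors are unit-normalized, so the dot product alone would also work:

```typescript
// similarity.ts: rank stored document embeddings against a query embedding.
// The full cosine formula is used for generality, even though OpenAI
// embeddings already have length 1.
export function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Return document indices sorted by similarity to the query, best first
export function rank(query: number[], docEmbeddings: number[][]): number[] {
  return docEmbeddings
    .map((doc, i) => ({ i, score: cosineSimilarity(query, doc) }))
    .sort((a, b) => b.score - a.score)
    .map((r) => r.i)
}
```

For more than a few thousand documents, move this into a vector database rather than scanning in memory.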

### Step 6: Vision (Image Understanding)

```typescript
// vision.ts — Analyze images with GPT-4o
import fs from 'node:fs'
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: 'What is in this image? Describe in detail.' },
      { type: 'image_url', image_url: { url: 'https://example.com/photo.jpg' } },
    ],
  }],
})

// Base64 image (from file upload)
const base64Image = fs.readFileSync('photo.jpg').toString('base64')
const response2 = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{
    role: 'user',
    content: [
      { type: 'text', text: 'Extract all text from this receipt.' },
      { type: 'image_url', image_url: { url: `data:image/jpeg;base64,${base64Image}` } },
    ],
  }],
})
```

### Step 7: Structured Outputs

```typescript
// structured.ts — Get JSON output matching a schema
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Extract info from: John Smith, 35, Software Engineer at Google, lives in SF' }],
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'person_info',
      strict: true,    // required for guaranteed schema compliance
      schema: {
        type: 'object',
        properties: {
          name: { type: 'string' },
          age: { type: 'number' },
          job_title: { type: 'string' },
          company: { type: 'string' },
          city: { type: 'string' },
        },
        required: ['name', 'age', 'job_title', 'company', 'city'],
        additionalProperties: false,    // strict mode requires this
      },
    },
  },
})
// Guaranteed to match the schema
const person = JSON.parse(response.choices[0].message.content!)
```

## Examples

### Example 1: Build a customer support chatbot with function calling
**User prompt:** "Build a chatbot for our e-commerce site that can check order status, search products, and answer FAQs using our knowledge base."

The agent will:
1. Define tools for `check_order_status`, `search_products`, and `search_knowledge_base`.
2. Create a chat endpoint with streaming for real-time responses.
3. Implement the tool execution loop (GPT calls tool → execute → send result back).
4. Add conversation history management for multi-turn interactions.

### Example 2: Build a document analysis pipeline
**User prompt:** "Users upload contracts (PDF/images). Extract key terms, dates, and parties, then generate a structured summary."

The agent will:
1. Use vision API to extract text from document images.
2. Use structured outputs to extract entities into a typed JSON schema.
3. Generate a plain-language summary with a system prompt tuned for legal documents.

## Guidelines

- Use `gpt-4o` for most tasks — it's the best balance of quality, speed, and cost. Use `gpt-4o-mini` for simple tasks where cost matters.
- Always set a `system` message to define the assistant's behavior, constraints, and output format.
- Use structured outputs (`response_format: json_schema`) when you need reliable JSON — it guarantees schema compliance, unlike asking for JSON in the prompt.
- For function calling, define clear `description` fields — GPT uses these to decide when to call each function.
- Use streaming for any user-facing chat interface — the perceived latency drops dramatically when users see tokens appear in real-time.
- Set `temperature: 0` for deterministic tasks (classification, extraction, code generation) and `0.7-1.0` for creative tasks (writing, brainstorming).
- Track token usage via `response.usage` for cost monitoring. Embedding and completion costs differ significantly.
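The cost-monitoring guideline can be wired into a small helper. The per-million-token prices below are illustrative placeholders, not authoritative pricing; check the current price list before relying on them:

```typescript
// cost.ts: estimate request cost from response.usage
// PRICES values are placeholders in USD per 1M tokens
const PRICES: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10.0 },
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
}

export function estimateCost(
  model: string,
  usage: { prompt_tokens: number; completion_tokens: number },
): number {
  const price = PRICES[model]
  if (!price) return NaN    // unknown model: caller decides how to handle
  return (
    (usage.prompt_tokens / 1_000_000) * price.input +
    (usage.completion_tokens / 1_000_000) * price.output
  )
}
```

In practice you would call `estimateCost(model, response.usage)` after each completion and aggregate per user or per feature.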

Related Skills

All of the following are from ComeOnOliver/skillshub.

- OpenAI Whisper API (curl): Transcribe an audio file via OpenAI's `/v1/audio/transcriptions` endpoint.
- OpenAI Image Gen: Generate a handful of "random but structured" prompts and render them via the OpenAI Images API.
- azure-ai-openai-dotnet: Azure OpenAI SDK for .NET. Client library for Azure OpenAI and OpenAI services; use for chat completions, embeddings, image generation, audio transcription, and assistants.
- scaffolding-openai-agents: Builds AI agents using the OpenAI Agents SDK with async/await patterns and multi-agent orchestration. Covers the Agent class, Runner patterns, function tools, guardrails, and streaming responses.
- openai-docs-skill: Query the OpenAI developer documentation via the OpenAI Docs MCP server using CLI (curl/jq), for up-to-date official guidance on the OpenAI API, SDKs, endpoint schemas, parameters, limits, and migrations.
- OpenAI Realtime API: Voice-native AI conversations.
- OpenAI Codex CLI: Open-source terminal-based coding agent that reads your codebase, generates and edits code, runs shell commands, and applies changes, with configurable approval modes (suggest, auto-edit, full-auto) and sandboxed execution.
- OpenAI Agents SDK: Official framework (formerly Swarm) for building multi-agent systems with tool calling, guardrails, agent handoffs, streaming, tracing, and MCP integration.
- LocalAI: Self-hosted OpenAI alternative.
- OpenAI Automation: Automate OpenAI API operations via the Composio MCP integration: generate responses with multimodal and structured output support, create embeddings, generate images, and list models.
- Daily Logs: Record the user's daily activities, progress, decisions, and learnings in a structured, chronological format.
- Socratic Method (The Dialectic Engine): Transforms Claude into a Socratic agent, a cognitive partner who guides…