lm-studio-subagents

Offload tasks to local LLMs via LM Studio. Use when a user asks to run local models with LM Studio, save API costs by using local LLMs, create subagents with local models, offload summarization or classification to a local model, or use LM Studio's API for batch processing. Covers local model inference, task delegation, and cost optimization.

26 stars

Best use case

lm-studio-subagents is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using lm-studio-subagents should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/lm-studio-subagents/SKILL.md --create-dirs "https://raw.githubusercontent.com/TerminalSkills/skills/main/skills/lm-studio-subagents/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/lm-studio-subagents/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How lm-studio-subagents Compares

| Feature / Agent | lm-studio-subagents | Standard Approach |
|-----------------|---------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It offloads high-volume LLM tasks (summarization, classification, extraction, reformatting, batch processing) to local models served through LM Studio's OpenAI-compatible API, cutting cloud API costs while keeping data on your machine.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# LM Studio Subagents

## Overview

Offload LLM tasks to local models running in LM Studio to save API costs and maintain privacy. LM Studio provides an OpenAI-compatible API for local models, making it a drop-in replacement for cloud LLM calls. Use local models for high-volume, lower-complexity tasks like summarization, extraction, classification, and reformatting while reserving cloud APIs for complex reasoning.

## Instructions

When a user wants to use local models via LM Studio, determine the task:

### Task A: Set up LM Studio as a local API server

1. Download and install LM Studio from `https://lmstudio.ai/`
2. Download a model through the LM Studio UI (recommended starting models):
   - `lmstudio-community/Llama-3.1-8B-Instruct-GGUF` (general purpose)
   - `lmstudio-community/Mistral-7B-Instruct-v0.3-GGUF` (fast inference)
   - `lmstudio-community/Qwen2.5-7B-Instruct-GGUF` (multilingual)

3. Start the local server:
   - Open LM Studio, go to the "Developer" tab
   - Load a model and click "Start Server"
   - Server runs at `http://localhost:1234` by default

4. Verify the server is running:

```bash
curl http://localhost:1234/v1/models
```
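
If you prefer to script the setup instead of clicking through the UI, recent LM Studio releases also bundle an `lms` command-line tool. A rough sketch; command names may differ across LM Studio versions, so check `lms --help`:

```bash
lms ls                                                  # list downloaded models
lms load lmstudio-community/Llama-3.1-8B-Instruct-GGUF  # load a model into memory
lms server start                                        # start the local API server
lms server stop                                         # stop it when done
```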

### Task B: Call LM Studio from Python (OpenAI-compatible)

```python
from openai import OpenAI

# Point to local LM Studio server
client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # Any string works
)

def ask_local(prompt: str, system: str = "You are a helpful assistant.") -> str:
    response = client.chat.completions.create(
        model="loaded-model",  # LM Studio ignores this, uses loaded model
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        temperature=0.3,
        max_tokens=1024,
    )
    return response.choices[0].message.content

# Example usage
result = ask_local("Summarize this text in 2 sentences: ...")
print(result)
```
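
The `model="loaded-model"` placeholder works because LM Studio answers with whatever model is loaded, but you can also ask the server for the real identifier via the standard `models.list()` endpoint of the OpenAI SDK; a short sketch:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Ask the server which model(s) it currently exposes
models = client.models.list()
model_id = models.data[0].id  # e.g. "llama-3.1-8b-instruct" (depends on what you loaded)
print(model_id)
```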

### Task C: Create task-specific subagents

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

class LocalSubagent:
    def __init__(self, system_prompt: str, temperature: float = 0.2):
        self.system_prompt = system_prompt
        self.temperature = temperature

    def run(self, user_input: str) -> str:
        response = client.chat.completions.create(
            model="loaded-model",
            messages=[
                {"role": "system", "content": self.system_prompt},
                {"role": "user", "content": user_input},
            ],
            temperature=self.temperature,
            max_tokens=2048,
        )
        return response.choices[0].message.content

# Define specialized subagents
summarizer = LocalSubagent(
    system_prompt="You are a summarization expert. Produce concise 2-3 sentence summaries."
)

classifier = LocalSubagent(
    system_prompt="Classify the input into one of these categories: billing, technical, general, urgent. Respond with only the category name.",
    temperature=0.0,
)

extractor = LocalSubagent(
    system_prompt="Extract all named entities (people, organizations, dates, amounts) from the text. Return as JSON.",
    temperature=0.0,
)

# Use the subagents
summary = summarizer.run("Long document text here...")
category = classifier.run("I can't log into my account and I need to submit a report by EOD")
entities = extractor.run("John Smith signed a $50,000 contract with Acme Corp on March 15, 2025")
```
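
Local models sometimes wrap JSON in prose or code fences, so it pays to parse the extractor's reply defensively rather than calling `json.loads` on it directly. A small sketch; the regex fallback is a heuristic, not part of the skill:

```python
import json
import re

def parse_json_reply(reply: str):
    """Best-effort parse of a model reply that should contain JSON."""
    try:
        return json.loads(reply)  # happy path: the reply is pure JSON
    except json.JSONDecodeError:
        pass
    # Fallback: grab the first {...} or [...] span and try again
    match = re.search(r"[\[{].*[\]}]", reply, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None

# Reusing the extractor subagent defined above
entities = parse_json_reply(extractor.run("John Smith signed a $50,000 contract..."))
```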

### Task D: Batch processing with local models

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

async def process_batch(items: list[str], system_prompt: str, max_concurrent: int = 4) -> list[str]:
    semaphore = asyncio.Semaphore(max_concurrent)

    async def process_one(text: str) -> str:
        async with semaphore:
            response = await client.chat.completions.create(
                model="loaded-model",
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": text},
                ],
                temperature=0.2,
                max_tokens=512,
            )
            return response.choices[0].message.content

    tasks = [process_one(item) for item in items]
    return await asyncio.gather(*tasks)

# Batch summarize 100 documents
documents = ["doc1 text...", "doc2 text...", ...]  # 100 documents
summaries = asyncio.run(process_batch(
    documents,
    system_prompt="Summarize in 2 sentences.",
    max_concurrent=2,  # LM Studio handles one request at a time by default
))
```
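
Long batch runs against a local server can hit timeouts or transient failures, and `asyncio.gather` will otherwise propagate the first exception and abort everything. Here is a variant of `process_batch` with a simple retry-and-sentinel guard (a sketch; the `"<ERROR>"` sentinel and backoff values are arbitrary choices):

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

async def process_batch_safe(items: list[str], system_prompt: str,
                             max_concurrent: int = 2, retries: int = 1) -> list[str]:
    """Like process_batch, but a single failed item cannot sink the whole run."""
    semaphore = asyncio.Semaphore(max_concurrent)

    async def process_one(text: str) -> str:
        async with semaphore:
            for attempt in range(retries + 1):
                try:
                    response = await client.chat.completions.create(
                        model="loaded-model",
                        messages=[
                            {"role": "system", "content": system_prompt},
                            {"role": "user", "content": text},
                        ],
                        temperature=0.2,
                        max_tokens=512,
                    )
                    return response.choices[0].message.content
                except Exception:
                    if attempt == retries:
                        return "<ERROR>"  # sentinel; inspect or re-run these items later
                    await asyncio.sleep(1.0 * (attempt + 1))  # simple linear backoff

    tasks = [process_one(item) for item in items]
    return await asyncio.gather(*tasks)
```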

### Task E: Cost comparison and routing strategy

Decide when to use local vs. cloud models:

| Task | Local Model | Cloud API | Recommendation |
|------|------------|-----------|----------------|
| Summarization | Good | Better | Local (save cost) |
| Classification | Good | Good | Local (save cost) |
| Data extraction | Moderate | Good | Local for simple, cloud for complex |
| Code generation | Moderate | Better | Cloud |
| Complex reasoning | Weak | Strong | Cloud |
| Translation | Good | Better | Local for common languages |

```python
def smart_route(task_type: str, text: str) -> str:
    """Route tasks between local and cloud models."""
    local_tasks = {"summarize", "classify", "extract_simple", "reformat"}

    if task_type in local_tasks:
        return ask_local(text)  # Free local inference (ask_local from Task B)
    else:
        return ask_cloud(text)  # Paid cloud call; define ask_cloud for your provider
```

## Examples

### Example 1: Summarize 500 support tickets locally

**User request:** "Summarize all our support tickets from last month without API costs"

```python
tickets = load_tickets_from_csv("tickets.csv")  # your own CSV loader (not shown here)
summaries = asyncio.run(process_batch(
    [t["description"] for t in tickets],
    system_prompt="Summarize this support ticket in one sentence. Include the main issue and any resolution.",
    max_concurrent=2,
))
# Cost: $0 (vs ~$15 with GPT-4)
```

### Example 2: Classify incoming emails

**User request:** "Auto-classify emails into categories using a local model"

```python
classifier = LocalSubagent(
    system_prompt="Classify this email into exactly one category: sales, support, spam, internal. Reply with only the category.",
    temperature=0.0,
)
for email in emails:
    category = classifier.run(email["subject"] + "\n" + email["body"])
    email["category"] = category.strip().lower()
```

### Example 3: Extract structured data from documents

**User request:** "Extract names, dates, and amounts from these contracts"

```python
extractor = LocalSubagent(
    system_prompt='Extract fields from the contract as JSON: {"parties": [], "date": "", "amount": "", "term": ""}',
    temperature=0.0,
)
for doc in contracts:
    data = extractor.run(doc["text"])
    print(f"{doc['filename']}: {data}")
```

## Guidelines

- LM Studio processes one request at a time by default, so keep `max_concurrent` at 1 or 2 for batch jobs.
- Use quantized models (Q4_K_M or Q5_K_M) for best speed-to-quality ratio on consumer hardware.
- 8B parameter models are the sweet spot for most extraction and classification tasks.
- Set `temperature=0.0` for deterministic tasks like classification and extraction.
- Test local model accuracy on a sample of 20-50 items before running full batches.
- For tasks where local models underperform, fall back to cloud APIs automatically (see the sketch after this list).
- Keep LM Studio running as a background service for always-on local inference.
- Monitor RAM and VRAM usage; 7B models need ~6 GB RAM (quantized) or ~16 GB (full precision).
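
The fallback guideline above can be automated by validating the local result and escalating only when a cheap check fails. A minimal sketch, reusing `ask_local` from Task B; `ask_cloud` is a hypothetical wrapper around your paid API, not defined in this skill:

```python
from typing import Callable

def with_fallback(prompt: str, validate: Callable[[str], bool]) -> str:
    """Try the local model first; escalate to the cloud only if validation fails."""
    local_result = ask_local(prompt)
    if validate(local_result):
        return local_result
    return ask_cloud(prompt)  # hypothetical cloud wrapper (define for your provider)

# Example: accept the local answer only if it is a known category
categories = {"billing", "technical", "general", "urgent"}
answer = with_fallback(
    "Classify this ticket: 'I was double charged last month.'",
    validate=lambda out: out.strip().lower() in categories,
)
```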

Related Skills (from TerminalSkills/skills)

label-studio
Open-source data labeling and annotation platform for ML projects. Supports text, image, audio, video, and time-series data. Features configurable labeling interfaces, ML-assisted labeling, team collaboration, and API integration for automated workflows.

google-ai-studio
Google AI Studio and Gemini API for multimodal AI. Use when you need multimodal AI (text + image + video + audio), long context up to 1M tokens, code generation with Gemini, grounding with Google Search, or structured output with response schemas.

drizzle-studio
Explore and manage databases with Drizzle Studio. Use when a user asks to browse database contents visually, inspect tables and data, run ad-hoc queries, manage database records through a GUI, debug database issues, or use a lightweight alternative to pgAdmin or DBeaver. Covers setup with Drizzle ORM, standalone usage, data browsing, filtering, and inline editing.

zustand
You are an expert in Zustand, the small, fast, and scalable state management library for React. You help developers manage global state without boilerplate using Zustand's hook-based stores, selectors for performance, middleware (persist, devtools, immer), computed values, and async actions, replacing Redux complexity with a simple, unopinionated API in under 1KB.

zoho
Integrate and automate Zoho products. Use when a user asks to work with Zoho CRM, Zoho Books, Zoho Desk, Zoho Projects, Zoho Mail, or Zoho Creator, build custom integrations via Zoho APIs, automate workflows with Deluge scripting, sync data between Zoho apps and external systems, manage leads and deals, automate invoicing, build custom Zoho Creator apps, set up webhooks, or manage Zoho organization settings. Covers Zoho CRM, Books, Desk, Projects, Creator, and cross-product integrations.

zod
You are an expert in Zod, the TypeScript-first schema declaration and validation library. You help developers define schemas that validate data at runtime AND infer TypeScript types at compile time, eliminating the need to write types and validators separately. Used for API input validation, form validation, environment variables, config files, and any data boundary.

zipkin
Deploy and configure Zipkin for distributed tracing and request flow visualization. Use when a user needs to set up trace collection, instrument Java/Spring or other services with Zipkin, analyze service dependencies, or configure storage backends for trace data.

zig
Expert guidance for Zig, the systems programming language focused on performance, safety, and readability. Helps developers write high-performance code with compile-time evaluation, seamless C interop, no hidden control flow, and no garbage collector. Zig is used for game engines, operating systems, networking, and as a C/C++ replacement.

zed
Expert guidance for Zed, the high-performance code editor built in Rust with native collaboration, AI integration, and GPU-accelerated rendering. Helps developers configure Zed, create custom extensions, set up collaborative editing sessions, and integrate AI assistants for productive coding.

zeabur
Expert guidance for Zeabur, the cloud deployment platform that auto-detects frameworks, builds and deploys applications with zero configuration, and provides managed services like databases and message queues. Helps developers deploy full-stack applications with automatic scaling and one-click marketplace services.

zapier
Automate workflows between apps with Zapier. Use when a user asks to connect apps without code, automate repetitive tasks, sync data between services, or build no-code integrations between SaaS tools.

zabbix
Configure Zabbix for enterprise infrastructure monitoring with templates, triggers, discovery rules, and dashboards. Use when a user needs to set up Zabbix server, configure host monitoring, create custom templates, define trigger expressions, or automate host discovery and registration.