## Best Use Case
LM Studio Subagents is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
## Overview
Teams using LM Studio Subagents can expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to Use This Skill
- You want a reusable workflow that can be run more than once with consistent structure.

## When Not to Use This Skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation

### Claude Code / Cursor / Codex (Manual)
- Download `SKILL.md` from GitHub
- Place it at `.claude/skills/lm-studio-subagents/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
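The directory layout from the steps above can be created from the shell (the download URL is whatever the project's GitHub page provides, so it is left out here):

```shell
# Create the skill directory the agent expects
mkdir -p .claude/skills/lm-studio-subagents

# Then copy the downloaded SKILL.md into place, e.g.:
# cp ~/Downloads/SKILL.md .claude/skills/lm-studio-subagents/SKILL.md
```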
## How LM Studio Subagents Compares
| Feature / Agent | LM Studio Subagents | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions

**What does this skill do?**
It offloads LLM tasks to local models running in LM Studio, using LM Studio's OpenAI-compatible API as a drop-in replacement for cloud LLM calls.

**Where can I find the source code?**
The source code is available on GitHub.
## SKILL.md Source
# LM Studio Subagents
## Overview
Offload LLM tasks to local models running in LM Studio to save API costs and maintain privacy. LM Studio provides an OpenAI-compatible API for local models, making it a drop-in replacement for cloud LLM calls. Use local models for high-volume, lower-complexity tasks like summarization, extraction, classification, and reformatting while reserving cloud APIs for complex reasoning.
## Instructions
When a user wants to use local models via LM Studio, determine the task:
### Task A: Set up LM Studio as a local API server
1. Download and install LM Studio from `https://lmstudio.ai/`
2. Download a model through the LM Studio UI (recommended starting models):
- `lmstudio-community/Llama-3.1-8B-Instruct-GGUF` (general purpose)
- `lmstudio-community/Mistral-7B-Instruct-v0.3-GGUF` (fast inference)
- `lmstudio-community/Qwen2.5-7B-Instruct-GGUF` (multilingual)
3. Start the local server:
- Open LM Studio, go to the "Developer" tab
- Load a model and click "Start Server"
- Server runs at `http://localhost:1234` by default
4. Verify the server is running:
```bash
curl http://localhost:1234/v1/models
```
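The same check can be scripted with only the standard library; a minimal sketch (`server_is_up` is an illustrative helper name, not part of LM Studio):

```python
import json
import urllib.request

def server_is_up(base_url: str = "http://localhost:1234") -> bool:
    """Return True if an LM Studio server answers /v1/models with a model list."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=5) as resp:
            payload = json.load(resp)
        # OpenAI-compatible servers return {"object": "list", "data": [...]}
        return isinstance(payload.get("data"), list)
    except OSError:
        return False
```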
### Task B: Call LM Studio from Python (OpenAI-compatible)
```python
from openai import OpenAI

# Point to the local LM Studio server
client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # any string works; LM Studio does not check it
)

def ask_local(prompt: str, system: str = "You are a helpful assistant.") -> str:
    response = client.chat.completions.create(
        model="loaded-model",  # LM Studio ignores this and uses the loaded model
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        temperature=0.3,
        max_tokens=1024,
    )
    return response.choices[0].message.content

# Example usage
result = ask_local("Summarize this text in 2 sentences: ...")
print(result)
```
### Task C: Create task-specific subagents
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

class LocalSubagent:
    def __init__(self, system_prompt: str, temperature: float = 0.2):
        self.system_prompt = system_prompt
        self.temperature = temperature

    def run(self, user_input: str) -> str:
        response = client.chat.completions.create(
            model="loaded-model",
            messages=[
                {"role": "system", "content": self.system_prompt},
                {"role": "user", "content": user_input},
            ],
            temperature=self.temperature,
            max_tokens=2048,
        )
        return response.choices[0].message.content

# Define specialized subagents
summarizer = LocalSubagent(
    system_prompt="You are a summarization expert. Produce concise 2-3 sentence summaries."
)
classifier = LocalSubagent(
    system_prompt="Classify the input into one of these categories: billing, technical, general, urgent. Respond with only the category name.",
    temperature=0.0,
)
extractor = LocalSubagent(
    system_prompt="Extract all named entities (people, organizations, dates, amounts) from the text. Return as JSON.",
    temperature=0.0,
)

# Use the subagents
summary = summarizer.run("Long document text here...")
category = classifier.run("I can't log into my account and I need to submit a report by EOD")
entities = extractor.run("John Smith signed a $50,000 contract with Acme Corp on March 15, 2025")
```
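The extractor prompt asks for JSON, but local models sometimes wrap the answer in a code fence or extra prose. A defensive parse is worth adding; a sketch (`parse_extraction` is an illustrative helper):

```python
import json

def parse_extraction(raw: str) -> dict:
    """Parse a model's JSON reply, tolerating a markdown code-fence wrapper."""
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")
        # Drop an optional language tag like "json"
        if cleaned.lower().startswith("json"):
            cleaned = cleaned[4:]
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return {"raw": raw}  # fall back to the unparsed text
```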
### Task D: Batch processing with local models
```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

async def process_batch(items: list[str], system_prompt: str, max_concurrent: int = 4) -> list[str]:
    semaphore = asyncio.Semaphore(max_concurrent)

    async def process_one(text: str) -> str:
        async with semaphore:
            response = await client.chat.completions.create(
                model="loaded-model",
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": text},
                ],
                temperature=0.2,
                max_tokens=512,
            )
            return response.choices[0].message.content

    tasks = [process_one(item) for item in items]
    return await asyncio.gather(*tasks)

# Batch summarize 100 documents
documents = ["doc1 text...", "doc2 text...", ...]  # 100 documents
summaries = asyncio.run(process_batch(
    documents,
    system_prompt="Summarize in 2 sentences.",
    max_concurrent=2,  # LM Studio handles one request at a time by default
))
```
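The semaphore pattern above can be exercised without a running server by swapping in a mock worker. This sketch (all names illustrative) verifies that no more than `max_concurrent` tasks are ever in flight:

```python
import asyncio

async def limited_gather(items, worker, max_concurrent: int = 2):
    # Same shape as process_batch: a semaphore caps in-flight coroutines
    sem = asyncio.Semaphore(max_concurrent)

    async def run_one(item):
        async with sem:
            return await worker(item)

    return await asyncio.gather(*(run_one(i) for i in items))

# Mock worker that records peak concurrency instead of calling the API
peak = 0
active = 0

async def mock_worker(item: str) -> str:
    global peak, active
    active += 1
    peak = max(peak, active)
    await asyncio.sleep(0.01)
    active -= 1
    return item.upper()

results = asyncio.run(limited_gather(["a", "b", "c", "d"], mock_worker, max_concurrent=2))
```

`asyncio.gather` preserves input order, so results line up with the original items even though completion order may differ.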
### Task E: Cost comparison and routing strategy
Decide when to use local vs. cloud models:
| Task | Local Model | Cloud API | Recommendation |
|------|------------|-----------|----------------|
| Summarization | Good | Better | Local (save cost) |
| Classification | Good | Good | Local (save cost) |
| Data extraction | Moderate | Good | Local for simple, cloud for complex |
| Code generation | Moderate | Better | Cloud |
| Complex reasoning | Weak | Strong | Cloud |
| Translation | Good | Better | Local for common languages |
```python
def smart_route(task_type: str, text: str) -> str:
    """Route tasks between local and cloud models."""
    local_tasks = {"summarize", "classify", "extract_simple", "reformat"}
    if task_type in local_tasks:
        return ask_local(text)  # free, local inference
    else:
        return ask_cloud(text)  # paid, cloud API
```
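Note that `smart_route` assumes an `ask_cloud` helper defined elsewhere in your codebase. The routing decision itself can be isolated so it is testable on its own; a sketch (`route_target` and `LOCAL_TASKS` are illustrative names):

```python
# Task types that are cheap and reliable on a local 7-8B model, per the table above
LOCAL_TASKS = {"summarize", "classify", "extract_simple", "reformat"}

def route_target(task_type: str) -> str:
    """Return 'local' for high-volume, low-complexity tasks, 'cloud' otherwise."""
    return "local" if task_type in LOCAL_TASKS else "cloud"
```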
## Examples
### Example 1: Summarize 500 support tickets locally
**User request:** "Summarize all our support tickets from last month without API costs"
```python
tickets = load_tickets_from_csv("tickets.csv")
summaries = asyncio.run(process_batch(
    [t["description"] for t in tickets],
    system_prompt="Summarize this support ticket in one sentence. Include the main issue and any resolution.",
    max_concurrent=2,
))
# Cost: $0 (vs ~$15 with GPT-4)
```
### Example 2: Classify incoming emails
**User request:** "Auto-classify emails into categories using a local model"
```python
classifier = LocalSubagent(
    system_prompt="Classify this email into exactly one category: sales, support, spam, internal. Reply with only the category.",
    temperature=0.0,
)

for email in emails:
    category = classifier.run(email["subject"] + "\n" + email["body"])
    email["category"] = category.strip().lower()
```
### Example 3: Extract structured data from documents
**User request:** "Extract names, dates, and amounts from these contracts"
```python
extractor = LocalSubagent(
    system_prompt='Extract fields from the contract as JSON: {"parties": [], "date": "", "amount": "", "term": ""}',
    temperature=0.0,
)

for doc in contracts:
    data = extractor.run(doc["text"])
    print(f"{doc['filename']}: {data}")
```
## Guidelines
- LM Studio processes one request at a time by default. Set `max_concurrent` to 1 or 2 for batch jobs.
- Use quantized models (Q4_K_M or Q5_K_M) for best speed-to-quality ratio on consumer hardware.
- 8B parameter models are the sweet spot for most extraction and classification tasks.
- Set `temperature=0.0` for deterministic tasks like classification and extraction.
- Test local model accuracy on a sample of 20-50 items before running full batches.
- For tasks where local models underperform, fall back to cloud APIs automatically.
- Keep LM Studio running as a background service for always-on local inference.
- Monitor RAM and VRAM usage; 7B models need ~6 GB RAM (quantized) or ~16 GB (full precision).
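The sampling guideline above can be made concrete before committing to a full batch; a sketch (`accuracy_on_sample` and `choose_backend` are illustrative helpers; `predict` would be a subagent's `run` method):

```python
def accuracy_on_sample(predict, labeled_sample) -> float:
    """labeled_sample: list of (input_text, expected_label) pairs."""
    hits = sum(
        1 for text, expected in labeled_sample
        if predict(text).strip().lower() == expected
    )
    return hits / len(labeled_sample)

def choose_backend(predict, labeled_sample, threshold: float = 0.9) -> str:
    # Run the full batch locally only if sample accuracy clears the threshold
    return "local" if accuracy_on_sample(predict, labeled_sample) >= threshold else "cloud"
```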