langchain-middleware
INVOKE THIS SKILL when you need human-in-the-loop approval, custom middleware, or structured output. Covers HumanInTheLoopMiddleware for human approval of dangerous tool calls, creating custom middleware with hooks, Command resume patterns, and structured output with Pydantic/Zod.
Best use case
langchain-middleware is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using langchain-middleware should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/langchain-middleware/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
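The manual steps above can be sketched from a project root; the download location (`~/Downloads/SKILL.md`) is illustrative:

```shell
# Create the directory the agent scans for skills
mkdir -p .claude/skills/langchain-middleware

# Then move the SKILL.md downloaded from GitHub into place, e.g.:
#   mv ~/Downloads/SKILL.md .claude/skills/langchain-middleware/SKILL.md
```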
How langchain-middleware Compares
| Feature / Agent | langchain-middleware | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It provides human-in-the-loop approval for dangerous tool calls via HumanInTheLoopMiddleware, custom middleware with hooks for error handling, logging, and retries, Command-based resume patterns, and structured output with Pydantic/Zod.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
<overview>
Middleware patterns for production LangChain agents:
- **HumanInTheLoopMiddleware** / **humanInTheLoopMiddleware**: Pause before dangerous tool calls for human approval
- **Custom middleware**: Intercept tool calls for error handling, logging, retry logic
- **Command resume**: Continue execution after human decisions (approve, edit, reject)
**Requirements:** Checkpointer + thread_id config for all HITL workflows.
</overview>
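<ex-structured-output-sketch>
<python>
The skill description also covers structured output with Pydantic, which has no example below; this is a minimal sketch of the Pydantic side only. The exact parameter for wiring the schema into `create_agent` (shown here as `response_format`) is an assumption and may differ by version, so it is left as a comment.

```python
from pydantic import BaseModel

class EmailDecision(BaseModel):
    recipient: str
    approved: bool
    reason: str

# Assumed wiring (parameter name may vary by LangChain version):
# agent = create_agent(model="gpt-4.1", tools=[...], response_format=EmailDecision)

# Pydantic validates the model's raw output into a typed object:
decision = EmailDecision.model_validate(
    {"recipient": "alice@company.com", "approved": True, "reason": "Routine update"}
)
print(decision.approved)  # → True
```
</python>
</ex-structured-output-sketch>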
---
## Human-in-the-Loop
<ex-basic-hitl-setup>
<python>
Set up an agent with HITL middleware that pauses before sending emails for approval.
```python
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langgraph.checkpoint.memory import MemorySaver
from langchain.tools import tool


@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    return f"Email sent to {to}"


agent = create_agent(
    model="gpt-4.1",
    tools=[send_email],
    checkpointer=MemorySaver(),  # Required for HITL
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {"allowed_decisions": ["approve", "edit", "reject"]},
            }
        )
    ],
)
```
</python>
<typescript>
Set up an agent with HITL that pauses before sending emails for human approval.
```typescript
import { createAgent, humanInTheLoopMiddleware } from "langchain";
import { MemorySaver } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const sendEmail = tool(
  async ({ to, subject, body }) => `Email sent to ${to}`,
  {
    name: "send_email",
    description: "Send an email",
    schema: z.object({ to: z.string(), subject: z.string(), body: z.string() }),
  }
);

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  checkpointer: new MemorySaver(),
  middleware: [
    humanInTheLoopMiddleware({
      interruptOn: { send_email: { allowedDecisions: ["approve", "edit", "reject"] } },
    }),
  ],
});
```
</typescript>
</ex-basic-hitl-setup>
<ex-running-with-interrupts>
<python>
Run the agent, detect an interrupt, then resume execution after human approval.
```python
from langgraph.types import Command

config = {"configurable": {"thread_id": "session-1"}}

# Step 1: Agent runs until it needs to call the tool
result1 = agent.invoke({
    "messages": [{"role": "user", "content": "Send email to john@example.com"}]
}, config=config)

# Check for interrupt
if "__interrupt__" in result1:
    print(f"Waiting for approval: {result1['__interrupt__']}")

# Step 2: Human approves
result2 = agent.invoke(
    Command(resume={"decisions": [{"type": "approve"}]}),
    config=config,
)
```
</python>
<typescript>
Run the agent, detect an interrupt, then resume execution after human approval.
```typescript
import { Command } from "@langchain/langgraph";

const config = { configurable: { thread_id: "session-1" } };

// Step 1: Agent runs until it needs to call the tool
const result1 = await agent.invoke({
  messages: [{ role: "user", content: "Send email to john@example.com" }]
}, config);

// Check for interrupt
if (result1.__interrupt__) {
  console.log(`Waiting for approval: ${result1.__interrupt__}`);
}

// Step 2: Human approves
const result2 = await agent.invoke(
  new Command({ resume: { decisions: [{ type: "approve" }] } }),
  config
);
```
</typescript>
</ex-running-with-interrupts>
<ex-editing-tool-arguments>
<python>
Edit the tool arguments before approving when the original values need correction.
```python
# Human edits the arguments; edited_action must include name + args
result2 = agent.invoke(
    Command(resume={
        "decisions": [{
            "type": "edit",
            "edited_action": {
                "name": "send_email",
                "args": {
                    "to": "alice@company.com",  # Fixed email
                    "subject": "Project Meeting - Updated",
                    "body": "...",
                },
            },
        }]
    }),
    config=config,
)
```
</python>
<typescript>
Edit the tool arguments before approving when the original values need correction.
```typescript
// Human edits the arguments; editedAction must include name + args
const result2 = await agent.invoke(
  new Command({
    resume: {
      decisions: [{
        type: "edit",
        editedAction: {
          name: "send_email",
          args: {
            to: "alice@company.com", // Fixed email
            subject: "Project Meeting - Updated",
            body: "...",
          },
        },
      }]
    }
  }),
  config
);
```
</typescript>
</ex-editing-tool-arguments>
<ex-rejecting-with-feedback>
<python>
Reject a tool call and provide feedback explaining why it was rejected.
```python
# Human rejects
result2 = agent.invoke(
    Command(resume={
        "decisions": [{
            "type": "reject",
            "feedback": "Cannot delete customer data without manager approval",
        }]
    }),
    config=config,
)
```
</python>
</ex-rejecting-with-feedback>
<ex-multiple-tools-different-policies>
<python>
Configure different HITL policies for each tool based on risk level.
```python
agent = create_agent(
    model="gpt-4.1",
    tools=[send_email, read_email, delete_email],
    checkpointer=MemorySaver(),
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {"allowed_decisions": ["approve", "edit", "reject"]},
                "delete_email": {"allowed_decisions": ["approve", "reject"]},  # No edit
                "read_email": False,  # No HITL for reading
            }
        )
    ],
)
```
</python>
</ex-multiple-tools-different-policies>
<boundaries>
### What You CAN Configure
- Which tools require approval (per-tool policies)
- Allowed decisions per tool (approve, edit, reject)
- Custom middleware hooks: `before_model`, `after_model`, `wrap_tool_call`, `before_agent`, `after_agent`
- Tool-specific middleware (apply only to certain tools)
### What You CANNOT Configure
- Interrupt after tool execution (must be before)
- Skip checkpointer requirement for HITL
</boundaries>
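<ex-custom-middleware-sketch>
<python>
The hooks listed above (e.g. `wrap_tool_call`) follow a wrap pattern: the middleware receives the tool request plus a handler and decides how to invoke it. This plain-Python sketch shows the retry idea without the LangChain classes; `retry_wrapper` and `flaky_tool` are illustrative names, not real API.

```python
def retry_wrapper(handler, max_attempts=3):
    """Wrap a tool handler, retrying transient failures."""
    def wrapped(request):
        last_error = None
        for _ in range(max_attempts):
            try:
                return handler(request)
            except RuntimeError as err:
                last_error = err
        raise last_error
    return wrapped

calls = {"count": 0}

def flaky_tool(request):
    # Fails twice, then succeeds, to exercise the retry path
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return f"ok: {request}"

safe_tool = retry_wrapper(flaky_tool)
print(safe_tool("send_email"))  # → ok: send_email (succeeds on the third attempt)
```

In a real middleware, `wrap_tool_call` plays the role of `wrapped` here, with the framework supplying the handler.
</python>
</ex-custom-middleware-sketch>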
<fix-missing-checkpointer>
<python>
HITL middleware requires a checkpointer to persist state.
```python
# WRONG: No checkpointer
agent = create_agent(model="gpt-4.1", tools=[send_email], middleware=[HumanInTheLoopMiddleware({...})])

# CORRECT: Add checkpointer
agent = create_agent(
    model="gpt-4.1",
    tools=[send_email],
    checkpointer=MemorySaver(),  # Required
    middleware=[HumanInTheLoopMiddleware({...})],
)
```
</python>
<typescript>
HITL requires a checkpointer to persist state.
```typescript
// WRONG: No checkpointer
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  middleware: [humanInTheLoopMiddleware({ interruptOn: { send_email: true } })],
});

// CORRECT: Add checkpointer
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  checkpointer: new MemorySaver(),
  middleware: [humanInTheLoopMiddleware({ interruptOn: { send_email: true } })],
});
```
</typescript>
</fix-missing-checkpointer>
<fix-no-thread-id>
<python>
Always provide thread_id when using HITL to track conversation state.
```python
# WRONG
agent.invoke(input) # No config!
# CORRECT
agent.invoke(input, config={"configurable": {"thread_id": "user-123"}})
```
</python>
</fix-no-thread-id>
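<ex-thread-id-sketch>
<python>
A common way to get a stable per-conversation `thread_id` is to generate it once and reuse it on every invoke and resume for that thread. This is a stdlib-only sketch; the `user-123` prefix is illustrative.

```python
import uuid

# Generate once per conversation, then reuse for every invoke/resume on that thread
thread_id = f"user-123-{uuid.uuid4().hex[:8]}"
config = {"configurable": {"thread_id": thread_id}}

print(config["configurable"]["thread_id"].startswith("user-123-"))  # → True
```
</python>
</ex-thread-id-sketch>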
<fix-wrong-resume-syntax>
<python>
Use Command class to resume execution after an interrupt.
```python
# WRONG
agent.invoke({"resume": {"decisions": [...]}})
# CORRECT
from langgraph.types import Command
agent.invoke(Command(resume={"decisions": [{"type": "approve"}]}), config=config)
```
</python>
<typescript>
Use Command class to resume execution after an interrupt.
```typescript
// WRONG
await agent.invoke({ resume: { decisions: [...] } });
// CORRECT
import { Command } from "@langchain/langgraph";
await agent.invoke(new Command({ resume: { decisions: [{ type: "approve" }] } }), config);
```
</typescript>
</fix-wrong-resume-syntax>
Related Skills
gin-middleware-creator
Gin Middleware Creator - Auto-activating skill for Backend Development. Triggers on: gin middleware creator. Part of the Backend Development skill category.
error-handler-middleware
Error Handler Middleware - Auto-activating skill for Backend Development. Triggers on: error handler middleware. Part of the Backend Development skill category.
llm-application-dev-langchain-agent
You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph.
laravel-sessions-middleware
Expert standards for session drivers, security headers, and middleware logic. Use when configuring session drivers, security headers, or custom middleware in Laravel. (triggers: app/Http/Middleware/**/*.php, config/session.php, session, driver, handle, headers, csrf)
langchain-rag
INVOKE THIS SKILL when building ANY retrieval-augmented generation (RAG) system. Covers document loaders, RecursiveCharacterTextSplitter, embeddings (OpenAI), and vector stores (Chroma, FAISS, Pinecone).
langchain-fundamentals
Create LangChain agents with create_agent, define tools, and use middleware for human-in-the-loop and error handling.
langchain-dependencies
INVOKE THIS SKILL when setting up a new project or when asked about package versions, installation, or dependency management for LangChain, LangGraph, LangSmith, or Deep Agents. Covers required packages, minimum versions, environment requirements, versioning best practices, and common community tool packages for both Python and TypeScript.
langchain-architecture
Design LLM applications using LangChain 1.x and LangGraph for agents, memory, and tool integration. Use when building LangChain applications, implementing AI agents, or creating complex LLM workflows.
graphviz-dot-generator
Graphviz Dot Generator - Auto-activating skill for Visual Content. Triggers on: graphviz dot generator. Part of the Visual Content skill category.
graphql-subscription-setup
Graphql Subscription Setup - Auto-activating skill for API Development. Triggers on: graphql subscription setup. Part of the API Development skill category.
graphql-schema-generator
Graphql Schema Generator - Auto-activating skill for API Development. Triggers on: graphql schema generator. Part of the API Development skill category.