Two pains bite every team reproducing LangChain's late-2025 Deep Agents blueprint.
Virtual-FS state grows unboundedly (P51). The planner and every subagent
write plans, scratch notes, intermediate drafts, and tool outputs into
state["files"]. Nothing ever evicts them. After 50 tool calls, the checkpointed
state is 8 MB; every MemorySaver.put() takes 400 ms; a run that started
at 1.2 s per node visit ends at 2.5 s per node visit. The LangSmith trace viewer
times out loading the thread. The user sees latency doubling over the run with
no obvious tool-level culprit.
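A quick way to surface this before it bites: wrap the checkpointer's put() and log serialized size plus write latency on every call. A minimal diagnostic sketch; the wrapper, its logging format, and the monkeypatch in the usage comment are illustrative, not part of the blueprint:

```python
import pickle
import time

def timed_put(put_fn):
    """Wrap a checkpointer's put() to log checkpoint size and write latency."""
    def wrapper(config, checkpoint, *args, **kwargs):
        t0 = time.perf_counter()
        result = put_fn(config, checkpoint, *args, **kwargs)
        elapsed_ms = (time.perf_counter() - t0) * 1000
        size_kb = len(pickle.dumps(checkpoint)) / 1024
        print(f"checkpoint {size_kb:.0f} KB, put() took {elapsed_ms:.0f} ms")
        return result
    return wrapper

# Hypothetical usage: checkpointer.put = timed_put(checkpointer.put)
```

A size curve that climbs linearly with step count while no single tool call got slower points straight at P51.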
Subagent persona leak (P52). The naive prompt-composition inside the
blueprint APPENDS the subagent role message to the parent's system message
instead of replacing it. The research-specialist subagent receives:
"You are a senior planner coordinating subagents..." + "You are a research specialist..." — and responds as the planner. It produces generic task
decomposition instead of the specific lookup you asked for. The bug is invisible
in unit tests because both messages "sound right" to a reviewer.
This skill pins to langgraph 1.0.x + langchain-core 1.0.x and walks through
the four-component Deep Agent pattern — planner, subagent pool of 3-8
role-specialized workers, virtual filesystem with eviction, reflection
node with bounded depth 3-5 — and shows exactly how to avoid P51 (cleanup
node + checkpoint-on-boundary) and P52 (a fresh SystemMessage per subagent
that replaces, never appends to, the parent prompt). Pain-catalog anchors: P51, P52.
- langgraph >= 1.0, < 2.0 and langchain-core >= 1.0, < 2.0
- pip install langchain-anthropic or langchain-openai
- langchain-langgraph-agents (L26) — you already know create_react_agent, tool schemas, recursion limits
- langchain-langgraph-subgraphs (L30) — subagent ≈ subgraph with a bounded contract; if L30 is not yet installed, the subagent construction in Step 3 is self-contained
- ANTHROPIC_API_KEY or OPENAI_API_KEY
- langchain-eval-harness skill installed for trajectory-level eval

A Deep Agent has four components. Each has a fixed contract; violating a contract is exactly where P51 / P52 show up.
| Component | Input | Output | Invariants |
|---|---|---|---|
| Planner | User goal, current state["plan"], current state["files"] summary (not full contents) | An ordered list of subtasks; each subtask has {subagent_role, instruction, expected_artifact_name} | Must NOT write to state["files"] directly. Only emits plan + assignments. |
| Subagent | {subagent_role, instruction, read_files: [names]} from planner | {artifact_name, content, status} back to planner via structured output | Receives a fresh SystemMessage that replaces the parent prompt — no inheritance (P52). Typical pool size: 3-8. |
| Virtual FS | Writes from subagents (never from planner) | Reads by planner (summaries) and subagents (full content) | Bounded. Cleanup node evicts entries older than N steps or with status == "done" (P51). |
| Reflection | Plan vs actual artifacts produced, subagent errors, step count | Decision: continue / replan / end / escalate_to_human | Runs at most 3-5 times per user-facing turn. |
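The planner-to-subagent handoff rows above can be pinned down as a small typed contract. A sketch only; the names Subtask and SubagentResult are illustrative, not from the blueprint:

```python
from typing import Literal, TypedDict

class Subtask(TypedDict):
    # Emitted by the planner, one per assignment.
    subagent_role: str            # must name a role in the 3-8 worker pool
    instruction: str              # the concrete task, not a restated user goal
    expected_artifact_name: str   # virtual-FS filename the subagent must produce
    read_files: list              # virtual-FS entries the subagent may read

class SubagentResult(TypedDict):
    # Returned to the planner via structured output.
    artifact_name: str
    content: str
    status: Literal["done", "failed"]
```

Keeping the contract this narrow is what makes violations (a planner writing files, a subagent returning a plan) easy to assert on in tests.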
The graph topology:
START -> planner -> (for each subtask) subagent_dispatcher -> virtual_fs_write
                                                                    |
                                                                    v
reflection -> planner (replan)
           -> END (done)
           -> interrupt (escalate)
The cleanup node sits on the transition into reflection — it prunes
state["files"] after each dispatch round, before reflection and the next planner step run.
The planner's prompt must be narrow. It decomposes, assigns, and revises — that is all. It does not answer the user's question itself.
PLANNER_SYSTEM = """You are the planner for a Deep Agent.
Your job is to decompose the user's goal into subtasks and assign each to a
specialized subagent. You do NOT execute subtasks yourself.

Available subagent roles: {roles_list}

Return a JSON plan:
{{
  "subtasks": [
    {{"role": "research-specialist",
      "instruction": "Find the most recent SEC 10-K filing for ACME.",
      "expected_artifact": "acme_10k_summary.md",
      "read_files": []}}
  ],
  "reasoning": "Why this decomposition."
}}

Do not write file contents. Do not answer the user directly.
"""
Keep the planner system prompt under ~1500 chars. Long planner prompts leak
into subagents if the prompt-replacement contract in Step 3 is skipped.
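Because the planner returns raw JSON, validate it before dispatching anything. A minimal sketch; parse_plan is a hypothetical helper, and the required keys mirror the schema in the prompt above:

```python
import json

REQUIRED_SUBTASK_KEYS = {"role", "instruction", "expected_artifact", "read_files"}

def parse_plan(raw: str, known_roles: set) -> dict:
    """Parse and validate the planner's JSON output; raise on contract violations."""
    plan = json.loads(raw)
    for sub in plan["subtasks"]:
        missing = REQUIRED_SUBTASK_KEYS - sub.keys()
        if missing:
            raise ValueError(f"subtask missing keys: {missing}")
        if sub["role"] not in known_roles:
            raise ValueError(f"unknown subagent role: {sub['role']}")
    return plan

# Example input shaped like the prompt's schema:
raw = ('{"subtasks": [{"role": "critic", "instruction": "Review x", '
       '"expected_artifact": "review.json", "read_files": ["x.md"]}], '
       '"reasoning": "one check"}')
plan = parse_plan(raw, known_roles={"research-specialist", "code-writer", "critic"})
```

Rejecting unknown roles here is cheap insurance: a hallucinated role name otherwise fails later, deep inside dispatch.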
This is the fix for P52. Never rely on default prompt composition for
subagents. Always build a fresh SystemMessage and pass it as the prompt=
argument so it replaces, rather than extends, the parent's system prompt.
from langchain_core.messages import SystemMessage, HumanMessage
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic
SUBAGENT_PROMPTS = {
    "research-specialist": (
        "You are a research specialist. Given an instruction, produce a "
        "fact-dense summary with inline citations. Return ONLY the summary. "
        "Do NOT plan, do NOT delegate."
    ),
    "code-writer": (
        "You are a code writer. Given a spec, produce runnable Python. "
        "Return ONLY code in one fenced block. Do NOT explain."
    ),
    "critic": (
        "You are a critic. Given an artifact and a spec, list concrete defects. "
        "Return a JSON list of {line, issue, severity}. Do NOT rewrite the artifact."
    ),
}
def build_subagent(role: str, model):
    return create_react_agent(
        model=model,
        tools=ROLE_TOOLS[role],
        # KEY: the prompt parameter replaces the default state_modifier entirely,
        # so the subagent never sees the parent's system message.
        prompt=SystemMessage(content=SUBAGENT_PROMPTS[role]),
    )
def invoke_subagent(subagent, instruction: str, read_files: dict):
    # Build messages from scratch — do not reuse parent message list.
    context = "\n\n".join(f"# {name}\n{content}" for name, content in read_files.items())
    messages = [HumanMessage(content=f"{context}\n\n## Task\n{instruction}")]
    result = subagent.invoke({"messages": messages})
    return result["messages"][-1].content
Rules:
- Build the subagent's message list from scratch: one HumanMessage carrying the read-file context plus the task. Never pass the parent's message history.
- prompt=SystemMessage(...) on create_react_agent replaces the default state_modifier. On LangGraph 1.0.x this is the supported way to pin a subagent's persona.

See Subagent Prompting for the full override-vs-append test, the handoff structured-output schema, and how to unit-test for the P52 persona leak.
This is the fix for P51. The virtual FS lives in the graph state dict for small artifacts (< 100 KB) and on real disk / an object store for large ones. A cleanup node evicts old entries before every planner step.
from typing import TypedDict, Annotated

def merge_files(existing: dict, update: dict) -> dict:
    # Dict-merge reducer with eviction support: a None value is a tombstone
    # that deletes the entry. A plain operator.or_ reducer would union the
    # dicts on every update, making eviction a silent no-op.
    merged = dict(existing)
    for name, entry in update.items():
        if entry is None:
            merged.pop(name, None)
        else:
            merged[name] = entry
    return merged

class DeepAgentState(TypedDict):
    messages: list
    plan: dict
    files: Annotated[dict, merge_files]  # {name: {content, written_at_step, status}}
    step: int
    reflection_depth: int

MAX_FILE_AGE_STEPS = 20
INLINE_SIZE_LIMIT_BYTES = 100 * 1024  # 100 KB — above this, spill to disk; keeps state < 500 KB (P51)

def cleanup_node(state: DeepAgentState) -> dict:
    current_step = state["step"]
    evictions = {}
    for name, entry in state["files"].items():
        age = current_step - entry["written_at_step"]
        if entry.get("status") == "done" and age > 3:
            evictions[name] = None  # evict completed
        elif age > MAX_FILE_AGE_STEPS:
            evictions[name] = None  # evict stale
    return {"files": evictions}
import os

def write_artifact(state, name: str, content: str) -> dict:
    if len(content.encode("utf-8")) > INLINE_SIZE_LIMIT_BYTES:
        # Spill to disk; keep only a pointer in state.
        path = f"/tmp/deep_agent/{state['step']}_{name}"
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(content)
        entry = {"content": None, "disk_path": path, "written_at_step": state["step"], "status": "active"}
    else:
        entry = {"content": content, "written_at_step": state["step"], "status": "active"}
    return {"files": {name: entry}}
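The spill logic needs a matching read path so subagents never care where an artifact actually lives. A sketch; read_artifact is a hypothetical helper mirroring write_artifact above:

```python
def read_artifact(state, name: str) -> str:
    """Return artifact content, transparently loading disk-spilled entries."""
    entry = state["files"][name]
    if entry.get("content") is not None:
        return entry["content"]
    # Entry was spilled to disk by write_artifact; follow the pointer.
    with open(entry["disk_path"]) as f:
        return f.read()
```

With this pair in place, the inline-vs-disk decision stays a private detail of the virtual FS.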
See Virtual Filesystem Patterns for dict-backed vs disk-backed trade-offs, content-addressed storage for dedup, and a full P51 mitigation procedure with before/after latency numbers.
Reflection compares plan vs artifacts produced so far and decides what happens next. Typical reflection depth: 3-5 rounds per user-facing turn. Any higher and the agent is either spinning or should escalate.
REFLECTION_SYSTEM = """You are the reflection node of a Deep Agent.
Input: the current plan, the artifacts produced so far (by filename + status),
and the step count. Output a decision:

{"decision": "continue"}                     # more subtasks remain, proceed
{"decision": "replan", "reason": "..."}      # current plan is wrong, replan
{"decision": "end", "final_answer": "..."}   # we are done
{"decision": "escalate", "question": "..."}  # ask the human

Do not rewrite files. Do not invent facts. Base the decision only on what is
present in files and plan.
"""
import json

MAX_REFLECTION_DEPTH = 5

def reflection_node(state: DeepAgentState, model) -> dict:
    if state["reflection_depth"] >= MAX_REFLECTION_DEPTH:
        return {"decision": {"decision": "escalate",
                             "question": "Max reflection depth reached."}}
    # Feed summaries, not full file contents, to the reflection model.
    file_summary = {name: {"status": e["status"], "bytes": len(e.get("content") or "")}
                    for name, e in state["files"].items()}
    msg = HumanMessage(content=f"Plan: {state['plan']}\nFiles: {file_summary}\nStep: {state['step']}")
    result = model.invoke([SystemMessage(content=REFLECTION_SYSTEM), msg])
    decision = json.loads(result.content)  # REFLECTION_SYSTEM constrains output to one JSON object
    return {"decision": decision, "reflection_depth": state["reflection_depth"] + 1}
See Reflection Loop for the plan-vs-actual
diff prompt, self-critique patterns, and the decision tree for replan vs
continue vs escalate.
Default MemorySaver checkpoints after every node visit. For a Deep Agent
that is disastrous: 50 node visits x 160 KB state = 8 MB of serialized state,
each write costing 400 ms (P51 root cause).
Fix: checkpoint only at user-facing boundaries — after reflection decides
end or escalate, not inside the planner/subagent loop. Use
interrupt_after=["reflection"] to get the boundary checkpoint, or switch to
a durable store (Postgres / Redis) and override put to no-op during internal
steps.
from functools import partial
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, END

checkpointer = MemorySaver()
graph = StateGraph(DeepAgentState)
graph.add_node("planner", planner_node)
graph.add_node("subagent_dispatcher", dispatch_node)
graph.add_node("cleanup", cleanup_node)
# reflection_node takes (state, model); bind the model at registration time.
graph.add_node("reflection", partial(reflection_node, model=model))
graph.set_entry_point("planner")
graph.add_edge("planner", "subagent_dispatcher")
graph.add_edge("subagent_dispatcher", "cleanup")
graph.add_edge("cleanup", "reflection")
graph.add_conditional_edges("reflection", route_after_reflection,
                            {"continue": "planner", "replan": "planner",
                             "end": END, "escalate": END})

# Only interrupt on the user-facing boundary — the checkpoint taken here is the
# one the user sees and can resume from.
app = graph.compile(checkpointer=checkpointer, interrupt_after=["reflection"])
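The router passed to add_conditional_edges above can be a few lines. A sketch that tolerates both a bare decision label and a parsed decision dict in state["decision"], and falls back to replan rather than looping blindly on malformed output:

```python
def route_after_reflection(state: dict) -> str:
    """Map the reflection output onto the conditional-edge labels."""
    raw = state.get("decision", "continue")
    # Tolerate either a parsed dict ({"decision": "replan", ...}) or a bare label.
    label = raw.get("decision", "continue") if isinstance(raw, dict) else raw
    return label if label in ("continue", "replan", "end", "escalate") else "replan"
```

Defaulting unknown output to replan keeps the loop inside the MAX_REFLECTION_DEPTH cap instead of raising mid-run.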
Unit-testing individual nodes will NOT catch P52 (persona leak is observable only in the subagent's output) or P51 (state growth is observable only over a full run). You need trajectory-level evaluation.
Cross-link: if langchain-eval-harness is installed, use its trajectory-eval
pattern with a golden dataset of {goal, expected_final_answer, expected_artifact_set}
and assert on: (a) len(pickle.dumps(state)) at turn end < 500 KB, (b) every
subagent's first message begins with its role prefix, (c) reflection depth
terminated at end or escalate (not hit MAX_REFLECTION_DEPTH silently).
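Assertions (a) and (c) can live in one reusable checker run against the final state of each golden-dataset trajectory. A sketch; check_trajectory_invariants is a hypothetical helper, and the role-prefix assertion (b) is omitted because it needs per-subagent message traces from the harness:

```python
import pickle

MAX_STATE_BYTES = 500_000  # P51 budget at the user-facing boundary
MAX_DEPTH = 5              # matches MAX_REFLECTION_DEPTH

def check_trajectory_invariants(final_state: dict) -> list:
    """Return invariant violations for a finished run (empty list = pass)."""
    problems = []
    # (a) bounded checkpoint size at turn end.
    if len(pickle.dumps(final_state)) >= MAX_STATE_BYTES:
        problems.append("P51 regression: checkpointed state >= 500 KB")
    # (c) clean termination via a reflection decision, not a silent cap hit.
    decision = final_state.get("decision")
    if isinstance(decision, dict):  # parsed {"decision": "...", ...} shape
        decision = decision.get("decision")
    if decision not in ("end", "escalate"):
        problems.append("run did not terminate via an end/escalate decision")
    if final_state.get("reflection_depth", 0) > MAX_DEPTH:
        problems.append("reflection depth exceeded its bound")
    return problems
```

Returning a list instead of asserting lets the harness report all violations for a trajectory at once.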
- Nodes: planner, subagent_dispatcher, cleanup, reflection
- Subagent build: create_react_agent(model, tools, prompt=SystemMessage(...)) — no parent-prompt inheritance (P52 fix)
- Virtual FS: state["files"] with eviction by age (> 20 steps) and by status (done + age > 3); disk spill for entries > 100 KB (P51 fix)
- Reflection: MAX_REFLECTION_DEPTH=5; escalates to human on cap
- Checkpoint: at reflection — the user-facing boundary — not every node visit
- Eval: langchain-eval-harness

Invariants to keep an eye on:
- state["files"] entries carry {written_at_step, status} — not bare content
- cleanup_node runs before every planner re-entry
- interrupt_after=["reflection"] — checkpoint only on the user-facing boundary
- reflection_node feeds {status, bytes} summaries — not raw file contents — to the model
- len(pickle.dumps(state)) asserted < 500 KB in trajectory tests
- Watch state["messages"] accumulation across user turns — reset on the boundary

| Error | Cause | Fix |
|---|---|---|
| Subagent response reads like the planner (generic decomposition instead of the specific task) | P52 — default prompt composition appended parent system message to subagent's role message | Pass prompt=SystemMessage(content=SUBAGENT_PROMPTS[role]) to create_react_agent; build messages from scratch per invocation, do not pass parent history |
| MemorySaver.put() latency climbs from 20 ms at step 5 to 400 ms at step 50 | P51 — state["files"] grew to MB-scale and is re-serialized every node visit | Add cleanup_node with age + status eviction; spill > 100 KB entries to disk; switch to interrupt_after=["reflection"] so the checkpoint fires only at the boundary |
| LangSmith trace viewer hangs when opening the thread | P51 — checkpointed state is megabytes per step | Same fix as above; additionally confirm reflection_node feeds file summaries (bytes + status) not full contents into the model |
| GraphRecursionError inside the Deep Agent loop | Plan never converges; reflection keeps returning continue or replan | Cap MAX_REFLECTION_DEPTH=5; when hit, force an escalate decision so the user gets a checkpoint and a question |
| Subagent returns full message trace instead of artifact | Subagent's prompt did not constrain output format | Add "Return ONLY the summary" / "Return ONLY code" to the subagent role prompt; validate output shape before writing to state["files"] |
| KeyError: 'files' during cleanup | Initial state shape did not include files: {} | Initialize DeepAgentState(messages=[], plan={}, files={}, step=0, reflection_depth=0) at graph entry |
| Two subagents write the same filename and one overwrites the other | No dedup / content-addressing | Namespace artifact names by {subagent_role}_{step}_{slug}.md, or use content-addressed storage (SHA-256 of content as key) |
| Agent terminates with end decision but no final answer produced | reflection_node decided end before any subagent produced an artifact | Guard the end branch with assert any(e["status"] == "done" for e in state["files"].values()) and route to replan otherwise |
See Architecture Blueprint for the full
four-component wiring with copy-paste StateGraph, DeepAgentState, node
definitions, and router function — reproducing LangChain's published blueprint
with the P51/P52 fixes already applied.
def test_subagent_does_not_inherit_planner_persona():
    subagent = build_subagent("research-specialist", model)
    out = invoke_subagent(subagent,
                          "What is the capital of France?",
                          read_files={})
    # Planner persona says "decompose the goal"; research persona answers directly.
    assert "decompose" not in out.lower()
    assert "subtask" not in out.lower()
    assert "paris" in out.lower()
import pickle

def test_state_stays_under_500kb_over_50_steps():
    thread = {"configurable": {"thread_id": "t-bound"}}
    app.invoke({"messages": [HumanMessage("Run a 50-step synthesis task")]}, config=thread)
    final = app.get_state(thread).values
    assert len(pickle.dumps(final)) < 500_000, "P51 regression: state > 500 KB"
- create_react_agent — prompt= parameter
- interrupt_after and durable checkpointers
- docs/pain-catalog.md (entries P51, P52)
- langchain-eval-harness — trajectory-level eval for the full Deep Agent loop