
Overview

The hard part of building agents (or any LLM application) is making them reliable. An agent that works in a prototype often fails once it meets real-world use cases.

Why do agents fail?

When agents fail, it’s usually because the LLM call inside the agent took the wrong action or produced an unexpected result. LLMs fail for one of two reasons:
  1. The underlying LLM is not capable enough
  2. The “right” context was not passed to the LLM
More often than not, it’s the second reason that makes agents unreliable. Context engineering is the practice of providing the right information and tools, in the right format, so the LLM can accomplish its task. It is the number one job of AI engineers: the lack of the “right” context is the biggest blocker to reliable agents, and LangChain’s agent abstractions are designed specifically to facilitate context engineering.
New to context engineering? Start with the conceptual overview to understand the different types of context and when to use them.

The agent loop

A typical agent loop consists of two main steps:
  1. Model call - calls the LLM with a prompt and available tools, returns either a response or a request to execute tools
  2. Tool execution - executes the tools that the LLM requested, returns tool results
[Diagram: the core agent loop]
This loop continues until the LLM decides to finish.
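In pseudocode, the loop looks roughly like this (a conceptual sketch only: runAgentLoop, callModel, and executeTool are hypothetical stand-ins, and createAgent handles all of this for you, along with middleware, streaming, and state persistence):
// Conceptual sketch only - not the actual createAgent implementation.
type ToolCall = { id: string; name: string; args: Record<string, unknown> };
type Message =
  | { role: "user" | "assistant" | "system"; content: string; toolCalls?: ToolCall[] }
  | { role: "tool"; content: string; toolCallId: string };

async function runAgentLoop(
  messages: Message[],
  callModel: (messages: Message[]) => Promise<Message>, // wraps the LLM call
  executeTool: (call: ToolCall) => Promise<string>, // runs one requested tool
): Promise<Message> {
  while (true) {
    // 1. Model call: the LLM sees the conversation and the available tools
    const response = await callModel(messages);
    messages.push(response);

    // No tool calls means the model produced a final answer, so the loop ends
    const toolCalls = response.role === "assistant" ? (response.toolCalls ?? []) : [];
    if (toolCalls.length === 0) return response;

    // 2. Tool execution: run each requested tool and append its result
    for (const call of toolCalls) {
      const result = await executeTool(call);
      messages.push({ role: "tool", content: result, toolCallId: call.id });
    }
  }
}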

What you can control

To build reliable agents, you need to control what happens at each step of the agent loop, as well as what happens between steps.
| Context Type | What You Control | Transient or Persistent |
| --- | --- | --- |
| Model Context | What goes into model calls (instructions, message history, tools, response format) | Transient |
| Tool Context | What tools can access and produce (reads/writes to state, store, runtime context) | Persistent |
| Life-cycle Context | What happens between model and tool calls (summarization, guardrails, logging, etc.) | Persistent |

Transient context

What the LLM sees for a single call. You can modify messages, tools, or prompts without changing what’s saved in state.

Persistent context

What gets saved in state across turns. Life-cycle hooks and tool writes modify this permanently.

Data sources

Throughout this process, your agent accesses (reads / writes) different sources of data:
| Data Source | Also Known As | Scope | Examples |
| --- | --- | --- | --- |
| Runtime Context | Static configuration | Conversation-scoped | User ID, API keys, database connections, permissions, environment settings |
| State | Short-term memory | Conversation-scoped | Current messages, uploaded files, authentication status, tool results |
| Store | Long-term memory | Cross-conversation | User preferences, extracted insights, memories, historical data |

How it works

Middleware is the mechanism under the hood that makes context engineering practical in LangChain. It allows you to hook into any step in the agent lifecycle and:
  • Update context
  • Jump to a different step in the agent lifecycle
Throughout this guide, you’ll see frequent use of the middleware API as a means to the context engineering end.

Model Context

Control what goes into each model call - instructions, available tools, which model to use, and output format. These decisions directly impact reliability and cost. All of these types of model context can draw from state (short-term memory), store (long-term memory), or runtime context (static configuration).

System Prompt

The system prompt sets the LLM’s behavior and capabilities. Different users, contexts, or conversation stages need different instructions. Successful agents draw on memories, preferences, and configuration to provide the right instructions for the current state of the conversation.
Access message count or conversation context from state:
import { createAgent, dynamicSystemPromptMiddleware } from "langchain";

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [...],
  middleware: [
    dynamicSystemPromptMiddleware((state) => {
      // Read from State: check conversation length
      const messageCount = state.messages.length;

      let base = "You are a helpful assistant.";

      if (messageCount > 10) {
        base += "\nThis is a long conversation - be extra concise.";
      }

      return base;
    }),
  ],
});

Messages

Messages make up the prompt that is sent to the LLM. It’s critical to manage the content of messages to ensure that the LLM has the right information to respond well.
Inject context about files uploaded earlier in the conversation from State:
import { createMiddleware } from "langchain";

const injectFileContext = createMiddleware({
  name: "InjectFileContext",
  wrapModelCall: (request, handler) => {
    // Read from State: files uploaded earlier in the conversation
    const uploadedFiles = request.state.uploadedFiles || [];

    if (uploadedFiles.length > 0) {
      // Build context about available files
      const fileDescriptions = uploadedFiles.map(file =>
        `- ${file.name} (${file.type}): ${file.summary}`
      );

      const fileContext = `Files you have access to in this conversation:
${fileDescriptions.join("\n")}

Reference these files when answering questions.`;

      // Append the file context as an additional user message
      const messages = [
        ...request.messages, // Existing conversation
        { role: "user", content: fileContext },
      ];
      return handler({ ...request, messages });
    }

    return handler(request);
  },
});

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [...],
  middleware: [injectFileContext],
});
Transient vs persistent message updates: The example above uses wrapModelCall to make transient updates - modifying the messages sent to the model for a single call without changing what’s saved in state. For persistent updates that modify state (like the summarization example in Life-cycle Context), use life-cycle hooks such as beforeModel or afterModel to permanently update the conversation history. See the middleware documentation for more details.

Tools

Tools let the model interact with databases, APIs, and external systems. How you define and select tools directly impacts whether the model can complete tasks effectively.

Defining tools

Each tool needs a clear name, description, argument names, and argument descriptions. These aren’t just metadata—they guide the model’s reasoning about when and how to use the tool.
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchOrders = tool(
  async ({ userId, status, limit = 10 }) => {
    // Implementation here
  },
  {
    name: "search_orders",
    description: `Search for user orders by status.

    Use this when the user asks about order history or wants to check
    order status. Always filter by the provided status.`,
    schema: z.object({
      userId: z.string().describe("Unique identifier for the user"),
      status: z.enum(["pending", "shipped", "delivered"]).describe("Order status to filter by"),
      limit: z.number().default(10).describe("Maximum number of results to return"),
    }),
  }
);

Selecting tools

Not every tool is appropriate for every situation. Too many tools may overwhelm the model (overload context) and increase errors; too few limit capabilities. Dynamic tool selection adapts the available toolset based on authentication state, user permissions, feature flags, or conversation stage.
Enable advanced tools only after certain conversation milestones:
import { createMiddleware } from "langchain";

const stateBasedTools = createMiddleware({
  name: "StateBasedTools",
  wrapModelCall: (request, handler) => {
    // Read from State: check authentication and conversation length
    const state = request.state;  
    const isAuthenticated = state.authenticated || false;  
    const messageCount = state.messages.length;

    let filteredTools = request.tools;

    // Only enable sensitive tools after authentication
    if (!isAuthenticated) {
      filteredTools = request.tools.filter(t => t.name.startsWith("public_"));  
    } else if (messageCount < 5) {
      filteredTools = request.tools.filter(t => t.name !== "advanced_search");  
    }

    return handler({ ...request, tools: filteredTools });  
  },
});
See Dynamically selecting tools for more examples.

Model

Different models have different strengths, costs, and context windows. Select the right model for the task at hand, which might change during an agent run.
Use different models based on conversation length from State:
import { createMiddleware, initChatModel } from "langchain";

// Initialize models once outside the middleware
const largeModel = await initChatModel("anthropic:claude-sonnet-4-5");
const standardModel = await initChatModel("openai:gpt-4o");
const efficientModel = await initChatModel("openai:gpt-4o-mini");

const stateBasedModel = createMiddleware({
  name: "StateBasedModel",
  wrapModelCall: (request, handler) => {
    // request.messages is a shortcut for request.state.messages
    const messageCount = request.messages.length;  
    let model;

    if (messageCount > 20) {
      model = largeModel;
    } else if (messageCount > 10) {
      model = standardModel;
    } else {
      model = efficientModel;
    }

    return handler({ ...request, model });  
  },
});
See Dynamic model for more examples.

Response Format

Structured output transforms unstructured text into validated, structured data. When extracting specific fields or returning data for downstream systems, free-form text isn’t sufficient. How it works: when you provide a schema as the response format, the agent runs the model and tool-calling loop until the model stops calling tools, then coerces the final response into the provided schema, so the output is guaranteed to conform to it.

Defining formats

Schema definitions guide the model. Field names, types, and descriptions specify exactly what format the output should adhere to.
import { z } from "zod";

const customerSupportTicket = z.object({
  category: z.enum(["billing", "technical", "account", "product"]).describe(
    "Issue category"
  ),
  priority: z.enum(["low", "medium", "high", "critical"]).describe(
    "Urgency level"
  ),
  summary: z.string().describe(
    "One-sentence summary of the customer's issue"
  ),
  customerSentiment: z.enum(["frustrated", "neutral", "satisfied"]).describe(
    "Customer's emotional tone"
  ),
}).describe("Structured ticket information extracted from customer message");
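To use the schema, pass it as the agent’s responseFormat. A minimal sketch (the structuredResponse field on the result follows the current createAgent docs; verify against your installed version):
import { createAgent } from "langchain";

// Assumes the customerSupportTicket schema defined above
const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [], // add tools as needed
  responseFormat: customerSupportTicket,
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "I was charged twice this month and I'm furious." }],
});

// The final response is coerced into the schema
console.log(result.structuredResponse);
// e.g. { category: "billing", priority: "high", summary: "...", customerSentiment: "frustrated" }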

Selecting formats

Dynamic response format selection adapts schemas based on user preferences, conversation stage, or role—returning simple formats early and detailed formats as complexity increases.
Configure structured output based on conversation state:
import { createMiddleware } from "langchain";
import { z } from "zod";

const simpleResponse = z.object({
  answer: z.string().describe("A brief answer"),
});

const detailedResponse = z.object({
  answer: z.string().describe("A detailed answer"),
  reasoning: z.string().describe("Explanation of reasoning"),
  confidence: z.number().describe("Confidence score 0-1"),
});

const stateBasedOutput = createMiddleware({
  name: "StateBasedOutput",
  wrapModelCall: (request, handler) => {
    // request.messages is a shortcut for request.state.messages
    const messageCount = request.messages.length;

    let responseFormat;
    if (messageCount < 3) {
      // Early conversation - use simple format
      responseFormat = simpleResponse;
    } else {
      // Established conversation - use detailed format
      responseFormat = detailedResponse;
    }

    return handler({ ...request, responseFormat });
  },
});

Tool Context

Tools are special in that they both read and write context. In the most basic case, a tool receives the LLM’s request parameters, does its work, and returns a tool message with the result. Tools can also fetch information the model needs in order to complete its task.

Reads

Most real-world tools need more than just the LLM’s parameters. They need user IDs for database queries, API keys for external services, or current session state to make decisions. Tools read from state, store, and runtime context to access this information.
Read from State to check current session information:
import * as z from "zod";
import { tool } from "@langchain/core/tools";

const checkAuthentication = tool(
  async (_, { runtime }) => {
    // Read from State: check current auth status
    const currentState = runtime.state;
    const isAuthenticated = currentState.authenticated || false;

    if (isAuthenticated) {
      return "User is authenticated";
    } else {
      return "User is not authenticated";
    }
  },
  {
    name: "check_authentication",
    description: "Check if user is authenticated",
    schema: z.object({}),
  }
);

Writes

Tool results help an agent complete its task. Tools can both return results directly to the model and update the agent’s memory, making important context available to future steps.
Write to State to track session-specific information using Command:
import * as z from "zod";
import { tool } from "@langchain/core/tools";
import { Command } from "@langchain/langgraph";

const authenticateUser = tool(
  async ({ password }, { runtime }) => {
    // Perform authentication
    if (password === "correct") {
      // Write to State: mark as authenticated using Command
      return new Command({
        update: { authenticated: true },
      });
    } else {
      return new Command({ update: { authenticated: false } });
    }
  },
  {
    name: "authenticate_user",
    description: "Authenticate user and update State",
    schema: z.object({
      password: z.string(),
    }),
  }
);
See Tools for comprehensive examples of accessing state, store, and runtime context in tools.

Life-cycle Context

Control what happens between the core agent steps - intercepting data flow to implement cross-cutting concerns like summarization, guardrails, and logging. As you’ve seen in Model Context and Tool Context, middleware is the mechanism that makes context engineering practical. Middleware allows you to hook into any step in the agent lifecycle and either:
  1. Update context - Modify state and store to persist changes, update conversation history, or save insights
  2. Jump in the lifecycle - Move to a different step in the agent loop based on context (e.g., skip tool execution if a condition is met, or repeat the model call with modified context)
[Diagram: middleware hooks in the agent loop]
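For example, a small custom middleware can observe each model call and, if needed, persist information back to state. A rough sketch (assuming beforeModel / afterModel hooks that receive the agent state and may return a partial state update; see the middleware docs for exact signatures and for jumping between steps):
import { createMiddleware } from "langchain";

const auditMiddleware = createMiddleware({
  name: "AuditMiddleware",
  beforeModel: (state) => {
    // Observe the run before each model call; returning nothing leaves state unchanged
    console.log(`Calling the model with ${state.messages.length} messages`);
  },
  afterModel: (state) => {
    // Returning a partial state update here persists it for future turns, e.g.
    // return { modelCallCount: (state.modelCallCount ?? 0) + 1 }
    // (hypothetical field - requires a matching custom state schema)
  },
});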

Example: Summarization

One of the most common life-cycle patterns is automatically condensing conversation history when it gets too long. Unlike the transient message updates shown in Model Context, summarization persistently updates state - permanently replacing old messages with a summary that’s saved for all future turns. LangChain offers built-in middleware for this:
import { createAgent, summarizationMiddleware } from "langchain";

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [...],
  middleware: [
    summarizationMiddleware({
      model: "openai:gpt-4o-mini",
      maxTokensBeforeSummary: 4000, // Trigger summarization at 4000 tokens
      messagesToKeep: 20, // Keep last 20 messages after summary
    }),
  ],
});
When the conversation exceeds the token limit, summarizationMiddleware automatically:
  1. Summarizes older messages using a separate LLM call
  2. Replaces them with a summary message in State (permanently)
  3. Keeps recent messages intact for context
The summarized conversation history is permanently updated - future turns will see the summary instead of the original messages.
For a complete list of built-in middleware, available hooks, and how to create custom middleware, see the Middleware documentation.

Best practices

  1. Start simple - Begin with static prompts and tools, add dynamics only when needed
  2. Test incrementally - Add one context engineering feature at a time
  3. Monitor performance - Track model calls, token usage, and latency
  4. Use built-in middleware - Leverage summarizationMiddleware, llmToolSelectorMiddleware, and other prebuilt options before writing your own
  5. Document your context strategy - Make it clear what context is being passed and why
  6. Understand transient vs persistent - Model context changes are transient (per-call), while life-cycle context changes persist to state
