Important: The hybrid option requires an Enterprise plan.
The hybrid model splits LangSmith infrastructure between LangChain’s cloud and yours:
  • Control plane (LangSmith UI, APIs, and orchestration) runs in LangChain’s cloud, managed by LangChain.
  • Data plane (your agent workloads) runs in your cloud, managed by you.
This combines the convenience of a managed interface with the flexibility of running workloads in your own environment.
Learn more about the control plane, data plane, and LangGraph Server architecture concepts.
Control plane (runs in LangChain’s cloud, managed by LangChain):
  • UI for creating deployments and revisions
  • APIs for managing deployments
  • Observability data storage
Data plane (runs in your cloud, managed by you):
  • Listener to sync with the control plane
  • LangGraph Servers (your agents)
  • Backing services (Postgres, Redis, etc.)
When hosting LangSmith in a hybrid model, you authenticate with a LangSmith API key.
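As a sketch, LangSmith API keys are commonly supplied through the LANGSMITH_API_KEY environment variable; the exact mechanism for providing the key to the data plane is covered in the Hybrid setup guide, and the value below is a placeholder:

```bash
# Placeholder value; create a real API key in the LangSmith UI
export LANGSMITH_API_KEY="lsv2_pt_..."
```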

Workflow

  1. Use the LangGraph CLI (langgraph-cli) or LangGraph Studio to test your graph locally.
  2. Build a Docker image using the langgraph build command.
  3. Deploy your LangGraph Server from the control plane UI.
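For example, steps 1 and 2 from the LangGraph CLI might look like the following; the image name and registry are placeholders, and the deployment itself is created from the control plane UI as in step 3:

```bash
# Step 1: run the graph locally with the in-memory dev server for quick iteration
langgraph dev

# Step 2: build a Docker image for your LangGraph Server (-t sets the image tag)
langgraph build -t registry.example.com/my-agent:0.1.0

# Push the image to a registry your Kubernetes cluster can pull from,
# then create the deployment from the control plane UI (step 3)
docker push registry.example.com/my-agent:0.1.0
```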
Supported Compute Platforms: Kubernetes.
For setup, refer to the Hybrid setup guide.

Architecture

Hybrid deployment architecture: the LangChain-hosted control plane (LangSmith UI/APIs) manages deployments, while your cloud runs a listener, LangGraph Server instances, and backing stores (Postgres/Redis) on Kubernetes.

Compute Platforms

  • Kubernetes: Hybrid supports running the data plane on any Kubernetes cluster.
For setup in Kubernetes, refer to the Hybrid setup guide.

Egress to LangSmith and the control plane

In the hybrid deployment model, your self-hosted data plane sends network requests to the control plane to poll for changes that need to be applied in the data plane. Traces from data plane deployments are also sent to the LangSmith instance integrated with the control plane. This traffic is encrypted over HTTPS, and the data plane authenticates with the control plane using a LangSmith API key. To enable this egress, you may need to update internal firewall rules or cloud resources (such as security groups) to allow certain IP addresses.
AWS/Azure PrivateLink and GCP Private Service Connect are not currently supported; this traffic goes over the public internet.
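As a quick sanity check before installing the listener, you can confirm that outbound HTTPS from the data-plane cluster is allowed. The hostname and path below are illustrative; use the control plane and LangSmith endpoints for your region from the Hybrid setup guide:

```bash
# From a pod or node in the data-plane cluster, verify outbound HTTPS is not blocked.
# Any HTTP status in the output means egress and TLS to the endpoint are working.
curl -sS -o /dev/null -w "%{http_code}\n" https://api.smith.langchain.com/info
```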

Listeners

In the hybrid option, one or more “listener” applications run in your data plane, depending on how your LangSmith workspaces and Kubernetes clusters are organized.

Kubernetes cluster organization

  • One or more listeners can run in a Kubernetes cluster.
  • A listener can deploy into one or more namespaces in that cluster.
  • Cluster owners are responsible for planning listener layout and LangGraph Server deployments.

LangSmith workspace organization

  • A workspace can be associated with one or more listeners.
  • A workspace can only deploy to Kubernetes clusters where all of its listeners are deployed.

Use Cases

Here are some common listener configurations (not strict requirements):

Each LangSmith workspace → separate Kubernetes cluster

  • Cluster alpha runs workspace A
  • Cluster beta runs workspace B

Separate clusters, with shared “dev” cluster

  • Cluster alpha runs workspace A
  • Cluster beta runs workspace B
  • Cluster dev runs workspaces A and B
  • Workspaces A and B each have two listeners; cluster dev runs two listener deployments (one per workspace)

One cluster, one namespace per workspace

  • Cluster alpha, namespace 1 runs workspace A
  • Cluster alpha, namespace 2 runs workspace B

One cluster, single namespace for multiple workspaces

  • Cluster alpha runs workspace A
  • Cluster alpha runs workspace B
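Whichever layout you choose, it can help to confirm where listeners are actually running. A rough check, assuming the listener Deployment name contains “listener” (the exact name depends on your installation):

```bash
# List Deployments in every namespace and filter for listener instances
kubectl get deployments --all-namespaces | grep -i listener
```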
