LangMem

Tier 2: Persistent (framework-scoped)

Memory SDK for teams already living in LangChain

github.com/langchain-ai/langmem ↗ · ~1,300 GitHub stars · Backed by LangChain ($25M+ Series A)

Our take

LangMem makes sense if and only if you're already committed to LangGraph. The three memory types (semantic, episodic, procedural) are well-designed, and the Prompt Optimizer, which automatically improves agent behavior based on conversation patterns, is something nobody else offers. But the caveats are real: 18-second search latency makes it unusable for anything interactive, the default InMemoryStore silently loses all data on restart (a trap for anyone following the quickstart), and 45% of developers who try LangChain never use it in production. If you're already in the ecosystem, LangMem is the natural choice. If you're evaluating memory tools fresh, start elsewhere.

How it works

SDK with three memory types: semantic (facts), episodic (experiences), procedural (learned behaviors). Built on top of LangGraph's storage layer. Includes a unique Prompt Optimizer that updates system prompts based on conversation patterns.
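
To make that concrete, here's a minimal sketch of wiring the memory tools into a LangGraph agent, closely following the project's quickstart pattern; the model string and embedding config are assumptions and exact signatures may shift between versions.

```python
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore
from langmem import create_manage_memory_tool, create_search_memory_tool

# Storage backend. Note: InMemoryStore is ephemeral -- see
# "The gotcha nobody mentions" below.
store = InMemoryStore(
    index={"dims": 1536, "embed": "openai:text-embedding-3-small"}
)

agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",
    tools=[
        # Lets the agent create, update, and delete memories in the store
        create_manage_memory_tool(namespace=("memories",)),
        # Lets the agent semantically search its stored memories
        create_search_memory_tool(namespace=("memories",)),
    ],
    store=store,
)
```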

When to use LangMem

  • + Teams already deep in LangChain/LangGraph who want memory without adding another vendor
  • + Applications where the agent should get better at its job over time (prompt optimization)
  • + Projects where memory needs to be tightly integrated with the agent framework

When to skip it

  • Teams not already on LangChain. Don't adopt LangChain just for LangMem
  • Real-time applications (p50 search latency is ~18 seconds, 120x slower than Mem0)
  • Production deployments that need a managed service (SDK-only, bring your own infrastructure)

What it does better than everything else

Prompt optimization from memory. LangMem automatically updates agent system prompts based on conversation patterns, making agents better at their job over time, not just remembering facts. No other tool in this space does this.
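
As a hedged sketch of what that looks like in code: langmem exposes a prompt optimizer that takes past conversations plus feedback and returns a rewritten system prompt. The trajectory format and the `kind` argument below follow the docs as we read them, so treat the details as assumptions.

```python
from langmem import create_prompt_optimizer

optimizer = create_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest",
    kind="metaprompt",  # strategy used to rewrite the prompt
)

# A trajectory pairs a past conversation with feedback on how it went
trajectories = [
    (
        [
            {"role": "user", "content": "Summarize this ticket"},
            {"role": "assistant", "content": "...a rambling summary..."},
        ],
        {"feedback": "Too verbose; use bullet points"},
    ),
]

# Returns an improved system prompt informed by the feedback
improved_prompt = optimizer.invoke(
    {"trajectories": trajectories, "prompt": "You are a support agent."}
)
```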

MCP support

Not available. No MCP server. LangChain provides MCP adapters (langchain-mcp-adapters) so LangMem agents can consume other MCP servers, but LangMem doesn't expose itself as one.
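
To illustrate the consuming direction, here's a sketch using langchain-mcp-adapters to pull MCP tools into an agent; the server name, URL, and transport are hypothetical placeholders, and the client API may vary by version.

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent


async def main():
    # Hypothetical MCP server config; point at whatever servers you run
    client = MultiServerMCPClient(
        {
            "docs": {
                "url": "http://localhost:8000/mcp",
                "transport": "streamable_http",
            },
        }
    )
    # Exposes the MCP server's tools as ordinary LangChain tools
    tools = await client.get_tools()

    # A LangMem-equipped agent can use these alongside its memory tools
    agent = create_react_agent("anthropic:claude-3-5-sonnet-latest", tools=tools)
    await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Search the docs"}]}
    )


asyncio.run(main())
```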

Why MCP matters: 5 MCP servers every engineering team should run →

Pricing

Free

Open source (MIT). A managed service with free long-term memory is launching.

Paid

LangSmith (observability) has separate pricing tiers.

The gotcha nobody mentions

The default InMemoryStore loses all data on restart. The quickstart tutorials use it, which means developers build demos that work perfectly... until the process stops. You must configure PostgreSQL or a vector database for any real use, but this isn't obvious until you lose your data.
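
A sketch of the fix, swapping in LangGraph's Postgres-backed store; the connection string is a placeholder and the embedding config is an assumption.

```python
from langgraph.store.postgres import PostgresStore

# Placeholder DSN; point at your own database
DB_URI = "postgresql://user:pass@localhost:5432/langmem"

with PostgresStore.from_conn_string(
    DB_URI,
    index={"dims": 1536, "embed": "openai:text-embedding-3-small"},
) as store:
    store.setup()  # creates the required tables on first run
    # Pass store=store to create_react_agent instead of InMemoryStore;
    # memories now survive process restarts.
```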

"As inflexibility began to show, developers found themselves diving into LangChain internals, but because LangChain intentionally abstracts so many details, it often wasn't easy or possible to write the lower-level code needed."

Frequently asked

Is LangMem the same as LangGraph memory?
They're complementary. LangGraph handles short-term memory via thread checkpoints. LangMem is a higher-level toolkit built on top of LangGraph that adds memory extraction, long-term storage, and prompt optimization (see the sketch below this FAQ).
Why is the search latency so high?
LangMem's p50 search is ~18 seconds, 120x slower than Mem0. The overhead comes from LangChain's abstraction layers and the LLM calls involved in memory processing. This makes it impractical for interactive chat but acceptable for background agent workflows.
Should I adopt LangChain just to use LangMem?
No. LangMem's value proposition only makes sense within the LangGraph ecosystem. If you're choosing a memory tool from scratch, Mem0 or Cognee offer better standalone experiences.
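
To make the first answer concrete, here's a minimal sketch combining both layers: the checkpointer gives each thread short-term conversational memory, while the store persists LangMem's long-term memories across threads. Class names follow current LangGraph packages and may shift between releases.

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore
from langmem import create_manage_memory_tool

agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",
    tools=[create_manage_memory_tool(namespace=("memories",))],
    checkpointer=MemorySaver(),  # LangGraph: short-term, per-thread state
    store=InMemoryStore(),       # LangMem writes long-term memories here
)

# The same thread_id restores the conversation via the checkpointer;
# the store is shared across all threads.
agent.invoke(
    {"messages": [{"role": "user", "content": "Remember I prefer dark mode"}]},
    config={"configurable": {"thread_id": "thread-1"}},
)
```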

Also in this space

LangMem is Tier 2: persistent (framework-scoped) memory.

It solves real problems for individual agents and users. But if your team needs shared, always-current memory that works across Cursor, Claude, and every other AI tool simultaneously — that's a different architecture entirely.

We're building that with Knowledge Plane. Join the beta →