Most small LLM applications don't need a state graph framework. I know this because I used LangGraph in 8 out of 10 AI projects I built—and eventually replaced it in most of them.
I want to be clear upfront: LangGraph is well-built software. The team behind it is sharp, the abstractions are thoughtful, and for genuinely complex multi-agent workflows, it earns its place. This isn't a takedown. It's a case for matching tools to problem size.
What Pulled Me In
The graph metaphor is compelling. Nodes for LLM calls, edges for transitions, a typed state object flowing through everything. When I first saw LangGraph's architecture diagram, it felt like someone had finally imposed order on the chaos of LLM non-determinism. As someone with a traditional software engineering background, that was reassuring.
My first project was a document processing pipeline—upload PDFs, extract information, run multiple analysis passes, generate reports. Real branching logic, real dependencies between steps, real error handling concerns. LangGraph handled it well. I could visualize the flow, trace state through the graph, and reason about the whole system by looking at the diagram.
For that use case, it was the right tool.
Where It Turned
The friction didn't come from a single failure. It accumulated gradually as overhead.
The first sign was when I added multi-document comparison to that pipeline. My rigidly typed state object—clean and purposeful in v1—suddenly needed to handle lists of documents, keyed dictionaries of extracted data, and conditional type guards everywhere. Every node had to branch on "one document vs. many." The state schema became a maintenance burden.
Then I started a new project: a simple chatbot. Answer questions over a knowledge base, remember conversation history, escalate to humans when confidence is low. I reached for LangGraph out of habit and built a state graph with nodes for retrieval, context assembly, response generation, confidence scoring, and escalation—plus conditional edges to route between them.
When I stepped back and looked at what I'd built, the mismatch was clear.
The code worked. But I had wrapped a linear pipeline with one branch in a state machine framework that required me to maintain type definitions, node signatures, and graph topology every time I wanted to tweak a prompt or adjust a threshold. The overhead of the framework exceeded the complexity of the actual problem.
The issue was that I'd been confusing "structured" with "complex." These applications weren't complex—they were sequential operations dressed up as graphs because the framework made them feel more rigorous.
The Question Worth Asking First
Before reaching for LangGraph on any given project, ask one question: do I have a genuinely complex workflow with many conditional paths and coordination between agents, or am I building a pipeline?
A chatbot is a pipeline. Retrieve context, generate response, check confidence. A document processor is a pipeline. Extract, transform, output. A summarization tool is a pipeline. Feed in text, get summary. These are sequences of operations with maybe a branch or two. They don't need a state graph—they need well-structured code with clear separation of concerns.
If the answer is "pipeline," you don't need LangGraph. You need functions, interfaces, and dependency injection.
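Here's a sketch of what that looks like for the chatbot case. Every name in this snippet is illustrative, not from the original codebase, and the LLM call is abstracted behind a function type so no SDK is assumed:

```typescript
// A chatbot pipeline as plain async functions: no graph, no state schema.
// Generate stands in for any LLM call; inject the real one at the edge.
type Generate = (prompt: string) => Promise<string>;

// Stand-in retrieval; a real version would query a vector store.
async function retrieveContext(question: string): Promise<string[]> {
  return [`Background fact relevant to: ${question}`];
}

async function answer(
  question: string,
  generate: Generate,
): Promise<{ text: string; escalate: boolean }> {
  const context = await retrieveContext(question);
  const text = await generate(
    `Context:\n${context.join("\n")}\n\nQuestion: ${question}`,
  );
  // The single branch: hand off to a human when confidence looks low.
  const escalate = text.toLowerCase().includes("not sure");
  return { text, escalate };
}
```

That's the whole "graph": a sequence of awaits and one `if`-shaped decision, testable by passing a fake `generate`.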
What I Replaced It With
I switched to the Vercel AI SDK with a hexagonal architecture—ports and adapters. The core idea is simple.
LLM providers become adapters behind a shared interface:
```typescript
// packages/adapters/src/llm/types.ts
import type { EmbeddingModel, LanguageModel, Tool } from "ai";

export interface LLMProvider {
  largeModel: LanguageModel;
  mediumModel: LanguageModel;
  smallModel: LanguageModel;
  embeddingModel: EmbeddingModel;
  webSearchTool?: { name: string; instance: Tool };
}
```
OpenAI, Gemini, Ollama—they all implement this. Domain code never knows which one is running.
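To make the adapter idea concrete, here's a self-contained sketch. The types are simplified stand-ins for the `ai` package's, and the model objects are placeholders for what provider packages such as `@ai-sdk/openai` would actually return—the model IDs are examples, not a recommendation:

```typescript
// Simplified stand-in types in place of the "ai" package's exports.
interface LanguageModel { modelId: string }
interface EmbeddingModel { modelId: string }

interface LLMProvider {
  largeModel: LanguageModel;
  mediumModel: LanguageModel;
  smallModel: LanguageModel;
  embeddingModel: EmbeddingModel;
}

// One adapter object per provider; domain code only ever sees LLMProvider.
const openAIProvider: LLMProvider = {
  largeModel: { modelId: "gpt-4o" },
  mediumModel: { modelId: "gpt-4o-mini" },
  smallModel: { modelId: "gpt-4o-mini" },
  embeddingModel: { modelId: "text-embedding-3-small" },
};

const ollamaProvider: LLMProvider = {
  largeModel: { modelId: "llama3.1:70b" },
  mediumModel: { modelId: "llama3.1:8b" },
  smallModel: { modelId: "llama3.2:3b" },
  embeddingModel: { modelId: "nomic-embed-text" },
};
```

Choosing a provider is then just choosing which of these objects to pass down.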
Agents take a model through constructor injection:
```typescript
// packages/core/src/agents/SummaryAgent/agent.ts
import { generateText, type LanguageModel } from "ai";

export class SummaryAgent {
  constructor(private readonly model: LanguageModel) {}

  async summarize(params: { ... }): Promise<string> {
    const { text } = await generateText({
      model: this.model,
      prompt: "...",
    });
    return text;
  }
}
```
No graph, no state schema, no framework runtime. A class that takes a model and does its job. Swap Gemini for Ollama by changing what you inject. The agent doesn't change.
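The wiring lives in one place—a composition root. This sketch uses a simplified `Model` type and fake provider objects rather than the real SDK types, so it can stand alone:

```typescript
// Composition-root sketch: the one place that knows which provider runs.
// Model and the provider objects are stand-ins, not the SDK's types.
type Model = { complete: (prompt: string) => Promise<string> };

class SummaryAgent {
  constructor(private readonly model: Model) {}
  summarize(text: string): Promise<string> {
    return this.model.complete(`Summarize:\n${text}`);
  }
}

const gemini: Model = { complete: async () => "[gemini] summary" };
const ollama: Model = { complete: async () => "[ollama] summary" };

// Swapping providers is one line at the injection site; the agent is untouched.
const agent = new SummaryAgent(process.env.USE_LOCAL ? ollama : gemini);
```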
Memory and embeddings follow the same pattern:
```typescript
// packages/adapters/src/memory/createFirestoreMemory.ts
import { type EmbeddingModel, embed } from "ai";

export function createFirestoreMemory(...) {
  const embedFn = (text: string) =>
    embed({ model: embeddingModel, value: text }).then((r) => r.embedding);

  return new FirestoreAgentMemory(db, embedFn, collectionName);
}
```
The domain layer gets a memory interface. It doesn't know Firestore is involved or which embedding model is running.
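That memory interface might look like this—a hypothetical "port" with an in-memory adapter alongside it (the interface shape is my illustration, not the article's actual code):

```typescript
// Hypothetical memory port as the domain layer sees it: no Firestore,
// no embedding model, just the operations agents need.
interface AgentMemory {
  remember(text: string): Promise<void>;
  recall(query: string, limit: number): Promise<string[]>;
}

// An in-memory adapter is enough for tests; Firestore is just another adapter.
class InMemoryAgentMemory implements AgentMemory {
  private entries: string[] = [];

  async remember(text: string): Promise<void> {
    this.entries.push(text);
  }

  async recall(query: string, limit: number): Promise<string[]> {
    // Naive substring match standing in for embedding similarity.
    return this.entries.filter((e) => e.includes(query)).slice(0, limit);
  }
}
```

Swapping `InMemoryAgentMemory` for a Firestore-backed adapter changes nothing in the agents that depend on `AgentMemory`.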
What This Actually Changed
The difference isn't code length—it's that complexity stays proportional to the problem.
Testing became straightforward. Pass a mock model into an agent constructor and test the logic. No graph runtime to simulate, no state transitions to set up.
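A minimal test double is all it takes—the agent never learns it's talking to a stub. (The AI SDK also ships mock model helpers for this; check its testing docs. The names below are illustrative.)

```typescript
// A hand-rolled stub in place of a real model.
type Model = { complete: (prompt: string) => Promise<string> };

class ConfidenceChecker {
  constructor(private readonly model: Model) {}

  async isConfident(answer: string): Promise<boolean> {
    const verdict = await this.model.complete(
      `Reply "high" or "low" for confidence in: ${answer}`,
    );
    return verdict.trim().toLowerCase() === "high";
  }
}

// One constructor call and an assertion; no graph runtime to stand up.
const alwaysHigh: Model = { complete: async () => "high" };
const checker = new ConfidenceChecker(alwaysHigh);
```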
Provider swaps are config changes. Switching from OpenAI to Gemini meant updating one file in packages/adapters. Zero changes to agent code.
Adding features is low friction. A new step in a pipeline means writing a function and calling it. No state schema updates, no node signature changes, no topology verification.
Onboarding is fast. The codebase uses patterns every TypeScript developer already knows—constructors, interfaces, composition. No library-specific mental model required.
When LangGraph Is Still the Right Call
There are cases where LangGraph's value is clear:
- Multi-agent systems with complex coordination. When agents need to share state and route to each other based on dynamic conditions, a graph model is a natural fit.
- Workflows with heavy human-in-the-loop patterns. LangGraph's checkpointing and interrupt/resume capabilities are purpose-built for this.
- Complex decision trees with many conditional branches. If your flow diagram looks like a subway map, a graph framework earns its keep.
If you're building one of these, LangGraph is likely the right choice.
But if you're building a chatbot, a RAG pipeline, a summarization tool, or a document processor—verify that you're solving your actual problem, not the framework's problem.
The Heuristic
Start with plain functions and dependency injection. Add a framework when the complexity of your coordination logic genuinely exceeds what straightforward code can express.
For most small LLM applications, you'll never reach that point.
This article was originally published by DEV Community and written by DeadLocker.