Introduction
AI agents are revolutionizing software development. Unlike simple prompts, agents can reason, use tools, and handle complex workflows. LangGraph provides a powerful framework for building stateful, production-ready agents. This guide covers everything you need to create sophisticated AI agents.
What Is LangGraph?
The Basic Concept
LangGraph is a framework for building agentic applications with LLMs. It extends LangChain with graph-based workflows, enabling:
- Stateful multi-step agents
- Complex branching logic
- Tool use and function calling
- Human-in-the-loop workflows
- Cycles and loops in workflows
Key Terms
- Node: A function that processes and updates state
- Edge: A connection between nodes
- State: Shared data passed through the graph
- Checkpoint: A persisted snapshot of state, keyed by thread
- Tool: An external function the agent can call
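These terms can be illustrated without the framework itself. The sketch below is plain Python, not LangGraph's actual API: two functions act as a node and a conditional edge over a shared state dict, with a small executor standing in for the graph runtime.

```python
# Minimal graph-executor sketch: nodes update shared state, a conditional
# edge picks the next node. Illustrative only; not LangGraph's API.

def increment(state: dict) -> dict:
    """Node: processes state and returns an update."""
    return {"count": state["count"] + 1}

def route(state: dict) -> str:
    """Conditional edge: picks the next node based on state."""
    return "increment" if state["count"] < 3 else "END"

nodes = {"increment": increment}

def run(state: dict) -> dict:
    current = "increment"  # entry point
    while current != "END":
        state = {**state, **nodes[current](state)}  # merge the node's update
        current = route(state)                      # follow the edge
    return state

print(run({"count": 0}))  # {'count': 3}
```

The loop is the key point: because edges can route back to an earlier node, the same graph expresses both straight-line pipelines and cycles.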
Architecture
Graph Structure
```
          ┌───────────────┐
          │     START     │
          └───────┬───────┘
                  │
                  ▼
      ┌───────────────────────┐
      │        LLM Node       │
      │ - Receives state      │
      │ - Makes decisions     │
      │ - Returns updates     │
      └───────────┬───────────┘
                  │
         ┌────────┴────────┐
         │                 │
         ▼                 ▼
  ┌─────────────┐   ┌─────────────┐
  │ Tool Node 1 │   │ Tool Node 2 │
  │ - Search    │   │ - Calculator│
  └──────┬──────┘   └──────┬──────┘
         │                 │
         └────────┬────────┘
                  ▼
          ┌───────────────┐
          │      END      │
          └───────────────┘
```
Building Your First Agent
Setup
```python
from typing import TypedDict

from langgraph.graph import StateGraph, END
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
from langchain_openai import ChatOpenAI

# Define the state
class AgentState(TypedDict):
    messages: list[BaseMessage]
    action: str
    result: str
```
Create the Graph
```python
from typing import TypedDict

from langgraph.graph import StateGraph, END
from langchain_core.messages import HumanMessage, AIMessage
from langchain_openai import ChatOpenAI

# Define state
class AgentState(TypedDict):
    messages: list[HumanMessage | AIMessage]

# Initialize LLM
llm = ChatOpenAI(model="gpt-4")

# Routing function: decide whether to continue or end.
# Used for conditional edges below -- it is not added as a node.
def should_continue(state: AgentState) -> str:
    last_message = state["messages"][-1]
    if "FINAL" in last_message.content.upper():
        return "end"
    return "continue"

# Node: call the LLM with the accumulated messages
def call_llm(state: AgentState):
    response = llm.invoke(state["messages"])
    return {"messages": state["messages"] + [response]}

# Build the graph
workflow = StateGraph(AgentState)
workflow.add_node("agent", call_llm)
workflow.set_entry_point("agent")

# Conditional routing: map the router's return value to the next node
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "agent", "end": END},
)

# Compile
app = workflow.compile()
```
Run the Agent
```python
# Invoke the agent
result = app.invoke({
    "messages": [HumanMessage(content="What's the weather in Tokyo?")]
})
print(result["messages"][-1].content)
```
Tool Use
Define Tools
```python
import os

import requests
from langchain_core.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get the weather for a location"""
    api_key = os.getenv("WEATHER_API_KEY")
    url = f"https://api.weather.com/v1/forecast?location={location}&key={api_key}"
    response = requests.get(url)
    return str(response.json())

@tool
def search_web(query: str) -> str:
    """Search the web for information"""
    # Implementation
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> str:
    """Calculate a mathematical expression"""
    # Warning: eval executes arbitrary code; never use it on untrusted input
    result = eval(expression)
    return str(result)
```
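The `eval` call in `calculate` will execute arbitrary Python, which is dangerous when the expression comes from an LLM or end user. One safer alternative (a sketch, assuming only basic arithmetic is needed) walks the parsed AST and allows a whitelist of operators:

```python
import ast
import operator

# Whitelist of arithmetic operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Evaluate a basic arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("Disallowed expression")
    return walk(ast.parse(expression, mode="eval"))

print(safe_calculate("2 + 3 * 4"))  # 14
```

Function calls, attribute access, and names never match a whitelisted node type, so an input like `__import__('os')` raises `ValueError` instead of executing.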
Bind Tools to LLM
```python
# bind_tools converts the tools to the model's function-calling format
tools = [get_weather, search_web, calculate]
llm_with_tools = llm.bind_tools(tools)

# Update node to use tools
def call_llm_with_tools(state: AgentState):
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}
```
Handle Tool Calls
```python
from langchain_core.messages import ToolMessage

def handle_tools(state: AgentState):
    """Execute tools based on LLM response"""
    last_message = state["messages"][-1]
    # Check for tool calls
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        tool_results = []
        for tool_call in last_message.tool_calls:
            tool_name = tool_call["name"]
            tool_args = tool_call["args"]
            # Find and call the matching tool
            for t in tools:
                if t.name == tool_name:
                    result = t.invoke(tool_args)
                    tool_results.append(
                        ToolMessage(
                            content=str(result),
                            tool_call_id=tool_call["id"],
                        )
                    )
        return {"messages": tool_results}
    return {}
```
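Stripped of framework types, tool dispatch is a lookup from tool name to callable. The framework-free sketch below uses plain dicts standing in for the LLM's `tool_calls` and a registry standing in for the tools list:

```python
# Tool-dispatch sketch: a registry maps tool names to plain functions.
# Tool calls are modeled as dicts; not LangChain's actual objects.

def get_weather(location: str) -> str:
    return f"Sunny in {location}"

def calculate(expression: str) -> str:
    return "42"

REGISTRY = {"get_weather": get_weather, "calculate": calculate}

def dispatch(tool_calls: list[dict]) -> list[dict]:
    """Run each requested tool and pair the result with its call id."""
    results = []
    for call in tool_calls:
        fn = REGISTRY.get(call["name"])
        if fn is None:
            # Surface the failure to the model instead of crashing
            results.append({"id": call["id"],
                            "content": f"Unknown tool: {call['name']}"})
            continue
        results.append({"id": call["id"], "content": fn(**call["args"])})
    return results

print(dispatch([{"id": "1", "name": "get_weather",
                 "args": {"location": "Tokyo"}}]))
```

Using a dict registry instead of a linear scan also makes unknown-tool handling explicit, which matters because models occasionally hallucinate tool names.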
Memory and State
Checkpointing
```python
from langgraph.checkpoint.sqlite import SqliteSaver

# Create checkpoint saver
checkpointer = SqliteSaver.from_conn_string("checkpoints.db")

# Compile with checkpointer
app = workflow.compile(checkpointer=checkpointer)

# Now the agent maintains state across conversations
config = {"configurable": {"thread_id": "user-123"}}

# First call
result1 = app.invoke(
    {"messages": [HumanMessage(content="My name is John")]},
    config
)

# Second call - remembers the name!
result2 = app.invoke(
    {"messages": [HumanMessage(content="What's my name?")]},
    config
)
```
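What the checkpointer does conceptually — keep one saved state per `thread_id` and merge new messages into it on every call — can be sketched in plain Python. This is an in-memory stand-in for illustration, not the `SqliteSaver` implementation:

```python
# In-memory stand-in for a checkpointer: one saved state per thread_id.

class MemoryCheckpointer:
    def __init__(self):
        self._store: dict[str, dict] = {}

    def load(self, thread_id: str) -> dict:
        return self._store.get(thread_id, {"messages": []})

    def save(self, thread_id: str, state: dict) -> None:
        self._store[thread_id] = state

def invoke(checkpointer: MemoryCheckpointer, thread_id: str, message: str) -> dict:
    """Load prior state, append the new message, save, and return state."""
    state = checkpointer.load(thread_id)
    state = {"messages": state["messages"] + [message]}
    checkpointer.save(thread_id, state)
    return state

cp = MemoryCheckpointer()
invoke(cp, "user-123", "My name is John")
state = invoke(cp, "user-123", "What's my name?")
print(state["messages"])  # both messages persist for this thread
```

Because the store is keyed by `thread_id`, different users (threads) never see each other's history, which is exactly the isolation the `config` dict above provides.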
Memory Types
```python
from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory

# Short-term memory (full message buffer within a conversation)
short_memory = ConversationBufferMemory()

# Summary memory (compresses older turns with the LLM)
summary_memory = ConversationSummaryMemory(llm=llm)

# Long-term memory is handled by the checkpointer: persist state per
# thread_id to a database (e.g. the SqliteSaver above, or Postgres)
# so it survives across sessions.
```
Complex Workflows
Multi-Agent Systems
```python
# Sub-agents: separately built graphs, compiled like the workflow above
research_agent = research_workflow.compile()
writer_agent = writer_workflow.compile()

# Supervisor node: inspect state and record which agent should act next
# (assumes AgentState carries a "current_task" field)
def supervisor_node(state: AgentState):
    """Supervise the workflow"""
    return {"next": state["current_task"]}

# Router for conditional edges: map the supervisor's decision to a node
def route_task(state: AgentState) -> str:
    if state["next"] == "research":
        return "research_agent"
    elif state["next"] == "write":
        return "writer_agent"
    return END

# Build supervisor workflow; compiled graphs can themselves be nodes
supervisor_graph = StateGraph(AgentState)
supervisor_graph.add_node("supervisor", supervisor_node)
supervisor_graph.add_node("research_agent", research_agent)
supervisor_graph.add_node("writer_agent", writer_agent)
supervisor_graph.set_entry_point("supervisor")
supervisor_graph.add_conditional_edges("supervisor", route_task)
```
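Framework aside, a supervisor is a router that drives sub-agents over an evolving shared state. The plain-Python sketch below shows that control flow; the agent functions are stand-ins, not real agents:

```python
# Supervisor control-flow sketch: route each task to the matching sub-agent.

def research_agent(state: dict) -> dict:
    return {**state, "notes": f"notes on {state['topic']}"}

def writer_agent(state: dict) -> dict:
    return {**state, "draft": f"article from {state['notes']}"}

AGENTS = {"research": research_agent, "write": writer_agent}

def supervise(state: dict, tasks: list[str]) -> dict:
    """Run tasks in order, handing the evolving state to each agent."""
    for task in tasks:
        agent = AGENTS.get(task)
        if agent is None:
            raise ValueError(f"No agent for task: {task}")
        state = agent(state)
    return state

result = supervise({"topic": "LangGraph"}, ["research", "write"])
print(result["draft"])
```

The shared state dict is what lets the writer build on the researcher's output, which is the same role `AgentState` plays in the graph version.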
Human-in-the-Loop
```python
def human_approval(state: AgentState):
    """Wait for human approval"""
    # This would integrate with a UI; input() stands in for it here
    approval = input("Approve this action? (yes/no): ")
    if approval.lower() == "yes":
        return {"approved": True}
    return {"approved": False}

# Add to workflow
workflow.add_node("human_approval", human_approval)

# Route to human for critical actions (assumes state tracks this flag);
# the returned name must be an existing node or END
workflow.add_conditional_edges(
    "agent",
    lambda state: "human_approval" if state["requires_approval"] else END,
)
```
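In production, the blocking `input()` call would be replaced by a pause-and-resume mechanism (LangGraph can interrupt before a node when compiled with a checkpointer). The gating logic itself is easier to test if the approver is an injected callback, so a UI or a test double can stand in for the human. A sketch of that pattern:

```python
from typing import Callable

def guarded_action(action: Callable[[], str],
                   requires_approval: bool,
                   approver: Callable[[], bool]) -> str:
    """Run the action, asking the approver first when the action is critical."""
    if requires_approval and not approver():
        return "Action rejected by human reviewer"
    return action()

# A test double approving everything; a real app would prompt a person.
result = guarded_action(lambda: "email sent", True, lambda: True)
print(result)  # email sent
```

Injecting the approver keeps the agent logic deterministic under test while leaving the real approval channel (web UI, Slack, etc.) as a deployment detail.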
Best Practices
1. Handle Errors Gracefully
```python
def safe_tool_node(state: AgentState):
    """Wrap tool execution with error handling"""
    try:
        return handle_tools(state)
    except Exception as e:
        return {
            "messages": [AIMessage(
                content=f"I encountered an error: {str(e)}. Let me try again."
            )]
        }
```
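Beyond reporting the error back to the model, transient failures (rate limits, timeouts) are often worth retrying with backoff before giving up. A minimal retry wrapper, as a sketch (the delay defaults to zero here so the example runs instantly; a real deployment would use a nonzero base delay):

```python
import time

def with_retries(fn, attempts: int = 3, delay: float = 0.0):
    """Call fn, retrying on failure up to `attempts` times."""
    last_error = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as e:            # in practice, catch specific errors
            last_error = e
            time.sleep(delay * (2 ** i))  # exponential backoff
    raise last_error

# Simulated flaky call: fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))  # ok
```

Combining this with `safe_tool_node` gives two layers: retries absorb transient faults, and the catch-all converts persistent ones into a message the agent can reason about.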
2. Validate Tool Inputs
```python
from pydantic import BaseModel, field_validator

class WeatherInput(BaseModel):
    location: str

    @field_validator('location')
    @classmethod
    def validate_location(cls, v):
        if not v or len(v) < 2:
            raise ValueError("Location must be at least 2 characters")
        return v

@tool(args_schema=WeatherInput)
def get_weather(location: str) -> str:
    """Get weather with validation"""
    # Inputs are validated against WeatherInput before this body runs
    return calls_weather_api(location)  # placeholder for the real API call
```
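The same guard can be written without pydantic, which makes the failure mode explicit: validate before the tool body runs, and raise a clear error the agent can relay. The functions below are illustrative stand-ins mirroring the validator above, not library APIs:

```python
def validate_location(location: str) -> str:
    """Reject empty or too-short locations before any API call is made."""
    if not location or len(location) < 2:
        raise ValueError("Location must be at least 2 characters")
    return location

def get_weather_checked(location: str) -> str:
    location = validate_location(location)
    # Stand-in for the real weather API call
    return f"Forecast for {location}: sunny"

print(get_weather_checked("Tokyo"))
```

Either way, the payoff is the same: a malformed argument from the model fails fast with an actionable message instead of producing a confusing downstream API error.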
3. Use Streaming
```python
# Stream agent responses
for event in app.stream(
    {"messages": [HumanMessage(content="Tell me a story")]}
):
    for node, values in event.items():
        if "messages" in values:
            print(values["messages"][-1].content, end="", flush=True)
```
Production Deployment
API Server
```python
from fastapi import FastAPI
from pydantic import BaseModel

from langchain_core.messages import HumanMessage

api = FastAPI()  # named `api` to avoid clashing with the compiled graph `app`

class MessageInput(BaseModel):
    message: str
    thread_id: str = "default"

@api.post("/agent")
async def agent_endpoint(input: MessageInput):
    config = {"configurable": {"thread_id": input.thread_id}}
    result = app.invoke(  # `app` is the compiled LangGraph agent
        {"messages": [HumanMessage(content=input.message)]},
        config
    )
    return {"response": result["messages"][-1].content}
```
Monitoring
```python
import os

# Enable LangSmith tracing via environment variables
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"

# debug=True additionally prints verbose execution logs locally
app = workflow.compile(
    checkpointer=checkpointer,
    debug=True,
)

# View traced runs at https://smith.langchain.com
```
Key Takeaways
- LangGraph builds stateful, multi-step agents
- Nodes process state, edges connect nodes
- Tools extend agent capabilities
- Checkpoints enable memory across sessions
- Best practices: Error handling, input validation, streaming
- Production: FastAPI deployment, monitoring with LangSmith