Introduction
The field of AI-driven applications is advancing rapidly, with LangGraph emerging as a cutting-edge framework for developing intelligent multi-agent systems. Created by the team behind LangChain, LangGraph is an open-source library designed to enable developers to build stateful, structured workflows for Large Language Models (LLMs).
LangGraph distinguishes itself from traditional actor-based models by introducing a shared state mechanism. This approach enhances collaboration and control over agent interactions, moving beyond isolated message queues.
This blog will delve into LangGraph's core features, its differences from traditional actor models, and practical applications. By the end, you will understand how to leverage LangGraph to build robust AI-powered applications.
LangGraph vs. Traditional Actor Models: What’s Different?
Most actor-based models operate using an isolated inbox message queue where each actor:
- Maintains its own private state.
- Communicates asynchronously via message passing.
- Works independently without direct access to other actors' states.
While this approach avoids threading complexity, it also creates communication silos between agents.
Note: For examples of actor-based programming frameworks, see Akka, Akka.NET, or Microsoft Orleans.
How LangGraph is Different
LangGraph breaks away from the traditional actor model by introducing a shared state mechanism that allows agents to:
- Collaborate dynamically by exchanging real-time updates.
- Track and retain context, making it ideal for conversational AI and operational workflows.
- Ensure transparency in decision-making and execution.
This means that instead of passing messages blindly, agents in a LangGraph-powered system can access a structured execution flow, making it more adaptable for multi-step reasoning, AI-driven decision-making, and real-time collaboration.
Key Features of LangGraph
1. Stateful Memory Management (Short-Term & Long-Term Memory)
LangGraph provides built-in memory management, allowing applications to retain state across interactions (a persistence sketch follows this list):
- Short-Term Memory – Helps chatbots remember past interactions within a session, making conversations feel more natural.
- Long-Term Memory – Integrates with external databases for persistent knowledge retention, useful for applications like customer support and recommendation engines.
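As a taste of the long-term option, here is a minimal sketch using a SQLite-backed checkpointer so state survives restarts. It assumes the langgraph-checkpoint-sqlite package and an already-built graph (both assumptions; the short-term MemorySaver approach is shown in full later in this post):

import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver

# Checkpoints are written to a database file, so they survive restarts
conn = sqlite3.connect("chat_history.db", check_same_thread=False)
saver = SqliteSaver(conn)

workflow = graph.compile(checkpointer=saver)  # assumes an existing `graph`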
2. Human-in-the-Loop (HITL) Support
One of LangGraph’s standout features is its seamless integration of human oversight, making it a strong fit for applications that require precision and validation (a pause-and-resume sketch follows this list).
🔹 Pause & Resume Workflows – Human reviewers can intervene at critical decision points.
🔹 Approve or Modify AI Decisions – Useful for legal, healthcare, and finance applications where errors can be costly.
🔹 Enhance Trust & Accountability – Keeps humans in control when needed.
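Here is a minimal pause-and-resume sketch using LangGraph's interrupt_before option. It assumes an already-built StateGraph named graph with a node called "draft_reply" (both hypothetical names):

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
# Pause execution every time the graph is about to run "draft_reply"
workflow = graph.compile(checkpointer=memory, interrupt_before=["draft_reply"])

config = {"configurable": {"thread_id": "review-1"}}
workflow.invoke({"messages": [{"role": "user", "content": "Draft a refund email"}]}, config=config)

# The run is now paused at the checkpoint; a human can inspect the state...
print(workflow.get_state(config).values)
# ...and resume from where it stopped by invoking with None as the input.
workflow.invoke(None, config=config)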
3. Real-Time Streaming Support
LangGraph supports real-time data streaming (an example follows this list), making it perfect for:
- Conversational AI & chatbots – Ensures instant responses.
- Live data processing – Useful for event-driven applications.
- Continuous updates – Enables real-time analytics.
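For example, a minimal streaming sketch, assuming the compiled workflow from the chatbot example later in this post:

# stream_mode="values" emits the full state after each node finishes
for event in workflow.stream(
    {"messages": [{"role": "user", "content": "Tell me a joke"}]},
    stream_mode="values",
):
    event["messages"][-1].pretty_print()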
4. Debugging & Observability
Debugging complex AI systems can be a nightmare. LangGraph simplifies the process (a short sketch follows this list) by:
- Providing full traceability of how data moves through the workflow.
- Allowing breakpoints to inspect state changes.
- Integrating with Python debugging tools for a smooth development experience.
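For instance, with a checkpointer attached you can replay a thread's execution history step by step. A sketch, assuming the compiled workflow and config from the memory example later in this post:

# Each snapshot records the state at a checkpoint and which node runs next
for snapshot in workflow.get_state_history(config):
    print(snapshot.next, snapshot.values)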
5. LangSmith Integration for Monitoring
LangGraph works seamlessly with LangSmith, providing advanced logging and performance tracking (a configuration snippet follows this list) to:
- Monitor errors and optimize workflows.
- Analyze model interactions in real time.
- Improve system efficiency through detailed analytics.
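Enabling LangSmith tracing is mostly configuration. A minimal snippet (it assumes you have a LangSmith API key; the project name is illustrative):

import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"         # turn on tracing
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "sreeni-chatbot"  # illustrative project name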
6. Asynchronous Execution for Scalability
LangGraph supports async processing (a sketch follows this list), making it ideal for:
- Non-blocking execution of long-running tasks.
- Concurrent user interactions.
- Handling high-traffic AI applications.
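Compiled graphs expose async counterparts of the synchronous methods. A minimal sketch, assuming the compiled workflow from the chatbot example later in this post:

import asyncio

async def main():
    # ainvoke frees the event loop while the LLM call is in flight
    response = await workflow.ainvoke(
        {"messages": [{"role": "user", "content": "Hello"}]}
    )
    print(response["messages"][-1].content)

asyncio.run(main())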
7. Double Texting Prevention
(Note: double-texting handling is only supported on the LangGraph Cloud platform, not in local deployments.)
Ever typed two messages before the chatbot responded? LangGraph prevents overlapping inputs to maintain consistency, making it great for:
- Fast-typing users in chatbots.
- Collaborative multi-user applications.
Core Components of LangGraph
To truly understand LangGraph, let’s break down its core building blocks:
1. Nodes
What are Nodes?
- The fundamental processing units in LangGraph.
- Implemented as Python functions that receive an input state, process it, and return an updated state (see the sketch after this list).
Example Use Cases:
- Making API calls.
- Transforming data.
- Running business logic.
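A minimal, self-contained sketch of a node (the state schema and logic are illustrative):

from typing_extensions import TypedDict

class State(TypedDict):
    count: int

# A node is just a function: it receives the current state and
# returns only the fields it wants to update.
def increment(state: State):
    return {"count": state["count"] + 1}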
2. Edges
What are Edges?
- Define logical connections between nodes.
- Determine execution order, optionally based on conditional logic (see the sketch after this list).
Example Use Cases:
- Directing user queries to different agents.
- Handling different conversation paths in a chatbot.
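A minimal sketch of a conditional edge, assuming a graph that already has "router", "billing_agent", and "support_agent" nodes (all hypothetical):

def route_query(state: dict) -> str:
    # Return the name of the next node based on the state (illustrative logic)
    if "invoice" in state.get("query", ""):
        return "billing_agent"
    return "support_agent"

# LangGraph calls route_query after "router" runs and follows its answer
graph.add_conditional_edges("router", route_query)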
3. State (Shared Context)
What is State?
- A Python dictionary or class that passes data between nodes.
- Can store session history, user preferences, or operational data.
- Supports short-term and long-term persistence.
Example Use Cases:
- Storing conversation history in a chatbot.
- Tracking workflow progress in an automated process.
Simple Chatbot using LangGraph
from langgraph.graph import StateGraph, START, END, add_messages
from dotenv import load_dotenv
from langchain_openai.chat_models import AzureChatOpenAI
from typing_extensions import TypedDict, Annotated

# Load environment variables (e.g., Azure OpenAI credentials)
load_dotenv()

llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",
    api_version="2024-08-01-preview",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
)

# State schema: add_messages appends new messages instead of overwriting
class Request(TypedDict):
    messages: Annotated[list, add_messages]

def chat(state: Request):
    # Call the LLM with the conversation so far and append its reply
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(Request)
graph.add_node("sreeni_chatbot", chat)
graph.add_edge(START, "sreeni_chatbot")
graph.add_edge("sreeni_chatbot", END)
workflow = graph.compile()

# Save a visual representation of the workflow graph
image = workflow.get_graph().draw_mermaid_png()
with open("sreeni_chatbot.png", "wb") as file:
    file.write(image)

while True:
    query = input("Enter your query? ")
    if query.lower() in ["quit", "exit"]:
        print("bye bye")
        exit(0)
    response = workflow.invoke({"messages": [{"role": "user", "content": query}]})
    # The last message in the state is the model's reply
    ai_response = response["messages"][-1].content
    print(ai_response)
So far, so good. But then, you ask:
"Do you remember my name?"
And the chatbot replies:
"I don’t have the ability to remember personal information or past interactions. Each session is independent, so I don’t retain any details once our conversation ends. How can I assist you today?"
At this point, it becomes clear that the chatbot doesn’t have memory: it treats each query as a fresh interaction. This is fine for simple Q&A but isn’t ideal for building conversational AI that needs to retain context.
To make a chatbot more intelligent and context-aware, we need to introduce memory. In this case, we'll implement short-term memory using an in-memory approach. This will allow the chatbot to remember details during a session, but once the application restarts, all memory will be erased. Let’s explore how to integrate this feature and make chatbot interactions feel more natural!
Adding Checkpoint Memory to the Chatbot
To add memory to the chatbot, import the MemorySaver class from LangGraph's checkpoint module:

from langgraph.checkpoint.memory import MemorySaver

Then create an instance of the class and pass it to graph.compile() as the checkpointer:

memory = MemorySaver()
workflow = graph.compile(checkpointer=memory)

Here is the complete example:
from typing import Annotated
from langgraph.graph import StateGraph, START, END, add_messages
from dotenv import load_dotenv
from langchain_openai.chat_models import AzureChatOpenAI
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver

# Load environment variables (e.g., API keys)
load_dotenv()

# Initialize Azure OpenAI chat model with specific parameters
llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",    # Specify the deployment name
    api_version="2024-08-01-preview",  # API version to use
    temperature=0,                     # Controls randomness (0 makes it deterministic)
    max_tokens=None,                   # No restriction on token count
    timeout=None,                      # No timeout set
    max_retries=2,                     # Retry up to 2 times in case of failure
)

# Define a data structure to hold conversation messages
class Request(TypedDict):
    msgs: Annotated[list, add_messages]  # add_messages appends rather than overwrites

# Create a LangGraph state graph with Request as state
graph = StateGraph(Request)

# Define a function to handle chat messages with context
def chat_with_context(state: Request):
    response = llm.invoke(state["msgs"])  # Call the LLM with the full history
    return {"msgs": [response]}           # add_messages appends the reply to the state

# Add the function as a node in the graph
graph.add_node("sreeni_chatbot", chat_with_context)

# Define the graph flow: Start → Chat → End
graph.add_edge(START, "sreeni_chatbot")
graph.add_edge("sreeni_chatbot", END)

# Initialize in-memory state persistence (short-term memory)
memory = MemorySaver()

# Compile the workflow with memory checkpointing
workflow = graph.compile(checkpointer=memory)

# Generate a visual representation of the workflow graph
image = workflow.get_graph().draw_mermaid_png()

# Save the generated graph image
with open("sreeni_chatbot.png", "wb") as file:
    file.write(image)

# The thread_id ties every invocation to the same checkpointed session
config = {"configurable": {"thread_id": "1"}}

# Continuous chat loop
while True:
    query = input("Enter your query or question? ")  # Get user input
    if query.lower() in ["quit", "exit"]:  # Exit condition
        print("bye bye")
        exit(0)
    # Invoke the workflow; the checkpointer restores the session state
    response = workflow.invoke({"msgs": [{"role": "user", "content": query}]}, config=config)
    # Print the chatbot's response (the last message in the state)
    print(response["msgs"][-1].content)
In this blog, we explored how to create a simple chatbot using LangGraph and how to add short-term memory to it using LangGraph's MemorySaver. This allows the chatbot to retain context within a session, enabling more natural and context-aware interactions. However, since this is an in-memory approach, the memory resets when the application restarts.
In Part II, we will take the chatbot to the next level by integrating tools for extended functionality and introducing Human-in-the-Loop (HITL) to allow user intervention in responses. Stay tuned! 🚀
Thanks
Sreeni Ramadorai