DEV Community

zpillsbury
How to make an AI agent with OpenAI, Langgraph, and MongoDB 💡✨

Large language models (LLMs) such as ChatGPT, Gemini, and Claude have become essential tools for businesses, and every company is eager to develop its own AI tailored to its specific needs or customers. In this guide, we will build a personalized assistant that supports function calling, stores messages in a database for multi-session conversations, and can perform web searches and summarize its findings. To keep the project organized and make it easy to add features later, we will use LangChain, LangGraph, and LangSmith. LangChain simplifies streaming and tool calling and lets you swap between different LLMs. LangGraph is an orchestration tool that decides which tool needs to be used and lets the agent choose its own path through the graph. LangSmith is an observability tool that lets you trace the whole process, from the moment you ask the LLM a question to when you get your final answer.

⭐️ The complete source code referenced in this guide is available on GitHub https://github.com/zpillsbury/ai-agent

Setup

Add environment variables to .env; you will need your OpenAI key, Tavily key, and Mongo URI.

📝 .env

OPENAI_KEY=sk-proj-XXXXXX
TAVILY_API_KEY=tvly-XXXXXXXXXXXXXXXXXXXXXXXX
MONGO_URI=mongodb+srvXXXXXXXXXXXXXXXXXXXXXXXXXXX

LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
LANGCHAIN_API_KEY=lsv2_pt_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxxx
LANGCHAIN_PROJECT=project-name

Load the LangSmith settings, OpenAI key, Tavily key, and Mongo URI into the settings.

📝 app/utilities/settings.py

from pydantic import SecretStr
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    openai_key: SecretStr
    tavily_api_key: str
    mongo_uri: SecretStr


    # LangSmith
    langchain_tracing_v2: bool = True
    langchain_endpoint: str = "https://api.smith.langchain.com"
    langchain_project: str = "ai-agent"
    langchain_api_key: str

    model_config = SettingsConfigDict(env_file=".env", extra="ignore")


settings = Settings()

Set up the OpenAI client using the secret key from the settings.

📝 app/main.py

from langchain_openai import ChatOpenAI

from .utilities.settings import settings

llm = ChatOpenAI(
    openai_api_key=settings.openai_key,
    model_name="gpt-4o-mini",
    max_retries=2,
)

Langgraph

Set up the state so data can be passed and used between nodes. We are adding messages to our chatbot's state and using an LLM to generate new ones. The add_messages function from LangGraph ensures that any messages returned from a node are appended to the existing messages state instead of overwriting it.

📝 app/main.py

from typing import Annotated, TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.graph.state import CompiledStateGraph


class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
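To see what this reducer does conceptually, here is a minimal plain-Python sketch. It is a simplification, not the real add_messages helper, which also matches message IDs so a resent message updates in place:

```python
def append_reducer(existing: list[str], new: list[str]) -> list[str]:
    # Simplified stand-in for langgraph's add_messages reducer:
    # node return values are appended to the state, not overwritten onto it.
    return existing + new


state = {"messages": ["hi there"]}
node_update = {"messages": ["Hello! How can I help?"]}

# LangGraph applies the reducer for us; done by hand it looks like this:
state["messages"] = append_reducer(state["messages"], node_update["messages"])
print(state["messages"])  # ['hi there', 'Hello! How can I help?']
```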

Nodes are the points on a graph, and in LangGraph nodes are represented by functions. Create a new function chatbot that calls OpenAI with llm.ainvoke, sending it the current state of stored messages. OpenAI will return a new AI message. Then return a state update containing that AI message; it will be appended to the existing messages thanks to the add_messages helper function.

LangGraph offers synchronous and asynchronous versions of its methods. To use the asynchronous version, just add an a in front of the method name. In this case we are using the async ainvoke instead of the sync invoke.
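As a quick illustration of that naming convention, here is a toy model class with both variants. This is a hypothetical stand-in, not LangChain's real classes:

```python
import asyncio


class EchoModel:
    """Hypothetical model illustrating the sync/async method naming."""

    def invoke(self, prompt: str) -> str:
        # Synchronous call: blocks until the result is ready.
        return f"echo: {prompt}"

    async def ainvoke(self, prompt: str) -> str:
        # Asynchronous counterpart: same name with an "a" prefix, awaitable.
        return self.invoke(prompt)


model = EchoModel()
print(model.invoke("hello"))                # sync
print(asyncio.run(model.ainvoke("hello")))  # async
```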

📝 app/main.py

async def chatbot(state: State) -> State:
    """
    Chatbot
    """
    response_message = await llm.ainvoke(state["messages"])

    return {"messages": [response_message]}


Add the function to the graph with add_node. You will connect the nodes together with edges; when looking at a picture of a graph, these are the lines between the nodes. The first edge always needs to begin with START. Connect the start of the graph (START) to the chatbot node with add_edge. Then compile the graph with graph_builder.compile; the graph needs to be compiled before you can use it.

📝 app/main.py

async def get_graph() -> CompiledStateGraph:
    """
    Get the graph
    """
    graph_builder = StateGraph(State)

    graph_builder.add_node("chatbot", chatbot)

    graph_builder.add_edge(START, "chatbot")
    graph_builder.add_edge("chatbot", END)

    graph = graph_builder.compile()

    return graph

The Graph currently looks like this:

(Graph diagram: START → chatbot → END)

Create a new async function run_graph that uses graph.astream (stream if sync) to call the compiled graph. This runs the chatbot node, which holds the chatbot function. The system prompt is inserted first, followed by the question the user typed, as the current state. This is sent to OpenAI, which returns the AI message, and the function appends it to the state. On each loop of the graph stream we can access the current state in the event variable; we extract the value from the event, take the content of its last message, and print it for the user.

The system prompt supplies the assistant with background information; including the current date makes the assistant's searches more accurate. The LLM is only trained up to a certain cutoff, so without the date it is more likely to find old articles instead of new ones.

📝 app/main.py

from datetime import datetime, timezone
from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage

now = datetime.now(timezone.utc)

system_prompt = f"""
You are an AI assistant helping an user.

Current Date: {now}
"""

async def run_graph(question: str) -> None:
    """
    Run the graph
    """
    graph = await get_graph()

    async for event in graph.astream(
        {
            "messages": [
                SystemMessage(content=system_prompt),
                HumanMessage(content=question),
            ]
        }
    ):
        for value in event.values():
            print(value["messages"][-1].content)

    return None

Create a new async function main that takes user input in a loop and stops the chat in the terminal if the user types quit, exit, or q. It calls run_graph with the user input, which goes through the process explained previously.

📝 app/main.py

import anyio


async def main() -> None:
    """
    AI Agent
    """
    while True:
        question = input("q: ")
        if question.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        await run_graph(question)

    return None


if __name__ == "__main__":
    anyio.run(main)

Tool Calling

You will be using your tavily_api_key from .env and settings here. TavilySearchResults lets your AI assistant look things up on the web. We put it inside a tools list, which makes it easy to add more tools later. Then set llm_with_tools = llm.bind_tools(tools), which gives your assistant the list of tools to use. Finally, change every following place that used llm to llm_with_tools.

📝 app/main.py

from langchain_community.tools.tavily_search import TavilySearchResults


web_search = TavilySearchResults(max_results=2)
tools = [web_search]

llm = ChatOpenAI(
    openai_api_key=settings.openai_key,
    model_name="gpt-4o-mini",
    max_retries=2,
)

llm_with_tools = llm.bind_tools(tools)

async def chatbot(state: State) -> State:
    """
    Chatbot
    """
    response_message = await llm_with_tools.ainvoke(state["messages"])

    return {"messages": [response_message]}

Next add a tool_node; this node runs the tool calls requested in AI messages. Then add add_conditional_edges, which gives the assistant an edge that can lead to different nodes depending on what it needs: it first checks whether a tool call is needed, and if not it routes to the END node.

📝 app/main.py

from langgraph.prebuilt import ToolNode, tools_condition


async def get_graph() -> CompiledStateGraph:
    """
    Get the graph
    """
    graph_builder = StateGraph(State)

    graph_builder.add_node("chatbot", chatbot)

    tool_node = ToolNode(tools=tools)
    graph_builder.add_node("tools", tool_node)

    graph_builder.add_edge(START, "chatbot")

    graph_builder.add_conditional_edges("chatbot", tools_condition)
    graph_builder.add_edge("tools", "chatbot")

    graph = graph_builder.compile()

    return graph

Storing Chat Data

You will be using your mongo_uri from .env and settings. Add the async_mongodb_client at the top of the file, then add a checkpointer that saves the conversation memory to MongoDB. Create the checkpointer inside your get_graph function, since it needs to be inside an async function to work.

📝 app/main.py

from typing import Annotated, Any, TypedDict
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.mongodb.aio import AsyncMongoDBSaver
from motor.motor_asyncio import AsyncIOMotorClient

async_mongodb_client: AsyncIOMotorClient[Any] = AsyncIOMotorClient(
    settings.mongo_uri.get_secret_value()
)


async def get_graph() -> CompiledStateGraph:
    """
    Get the graph
    """
    checkpointer = AsyncMongoDBSaver(
        client=async_mongodb_client,
        db_name="ai",
        checkpoint_collection_name="checkpoints",
        writes_collection_name="checkpoint_writes",
    )

    graph_builder = StateGraph(State)

    graph_builder.add_node("chatbot", chatbot)

    tool_node = ToolNode(tools=tools)
    graph_builder.add_node("tools", tool_node)

    graph_builder.add_edge(START, "chatbot")

    graph_builder.add_conditional_edges("chatbot", tools_condition)
    graph_builder.add_edge("tools", "chatbot")

    graph = graph_builder.compile(checkpointer=checkpointer)

    return graph


Now configure your thread_id using MongoDB for storing messages across different chat histories. This setup enables the assistant to remember conversations across multiple sessions and allows you to create separate chat histories by using a different thread_id.

📝 app/main.py

from langchain_core.runnables import RunnableConfig


async def run_graph(config: RunnableConfig, question: str) -> None:
    """
    Run the graph
    """
    graph = await get_graph()

    async for event in graph.astream(
        {
            "messages": [
                SystemMessage(content=system_prompt),
                HumanMessage(content=question),
            ]
        },
        config=config,
        stream_mode="values",
    ):
        event["messages"][-1].pretty_print()

    return None


async def main() -> None:
    """
    AI Agent
    """
    config = RunnableConfig(configurable={"thread_id": 1})

    while True:
        question = input("q: ")
        if question.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        await run_graph(config=config, question=question)

    return None


You can run the script and view the output. Since we gave the assistant the current date, you can see it retrieves the most recent information about the team.

$ python3 -m app.main

q: what is the carolina panthers current record?                
================================ Human Message =================================

what is the carolina panthers current record?
================================== Ai Message ==================================
Tool Calls:
  tavily_search_results_json (call_apvK6LYyrTMRunPcoO8enCqD)
 Call ID: call_apvK6LYyrTMRunPcoO8enCqD
  Args:
    query: Carolina Panthers current record December 2024
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://www.footballdb.com/teams/nfl/carolina-panthers/results", "content": "View the 2024 Carolina Panthers schedule, results and scores for regular season, preseason and postseason NFL games. ... 2024 record: 3-11 (4th in NFC South) Draft; Splits; Transactions; Injuries; ... December 29, 2024 1:00 PM; Carolina (3-11)--Tampa Bay (8-6)--Last Meeting: TB 26 @ CAR 23 (12/1/2024)"}, {"url": "https://champsorchumps.us/team/nfl/carolina-panthers/2024", "content": "The Carolina Panthers currently have a 3-9 record. The 2024 Panthers are 2-5 at home and 1-4 on the road. What division do the 2024 Carolina Panthers play in? The 2024 Carolina Panthers played in the South Division of the National Football Conference."}]
================================== Ai Message ==================================

As of December 2024, the Carolina Panthers have a record of **3 wins and 11 losses** (3-11). They are currently in 4th place in the NFC South division.

You can view the run step by step on your langsmith project.

(Screenshot: LangSmith trace of the run)
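One thing to watch: LangChain reads the LangSmith configuration from process environment variables, and loading them through pydantic settings alone does not export them unless they were already set in your shell. Here is a small sketch of one way to mirror them into the environment; the dictionary values below are placeholders standing in for the langchain_* fields on the settings object shown earlier:

```python
import os

# Placeholder values; in the real app these would come from the
# langchain_* fields on the pydantic settings object.
langsmith_config = {
    "LANGCHAIN_TRACING_V2": "true",
    "LANGCHAIN_ENDPOINT": "https://api.smith.langchain.com",
    "LANGCHAIN_API_KEY": "lsv2_pt_example",
    "LANGCHAIN_PROJECT": "ai-agent",
}

# Export before any graph runs so traces show up in your project.
for key, value in langsmith_config.items():
    os.environ[key] = value
```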

These are the basics of building an AI agent that decides on its own whether it needs tool calling and can look up information on the web, as well as store information per thread so your assistant remembers previous conversations. Since we are using LangChain, it will be easy to add other features like streaming, multi-model support, and other LLMs in the future.
