Introduction
In Part 1, we built a simple AI-powered assistant using LangGraph, Next.js, TailwindCSS, and Pinecone. Our AI agent could process user queries, retrieve relevant knowledge from Pinecone, and generate responses using OpenAI’s GPT model.
Now, in Part 2, we’ll enhance our AI agent by adding multi-agent workflows, contextual memory, and real-time conversation capabilities. By the end of this tutorial, you’ll have a more advanced AI assistant that can handle ongoing interactions dynamically.
Enhancements & Features in Part 2
✅ Multi-Agent Workflow – Implementing additional AI agents for different tasks.
✅ Long-Term Memory – Storing past interactions in Pinecone for context-aware conversations.
✅ Real-Time Conversation UI – Improving the chat experience with live responses.
✅ API Call Capabilities – Allowing AI agents to fetch data from external sources.
Step 1: Implementing Multi-Agent Workflows with LangGraph
AI agents often need to delegate tasks to specialized agents. For example, one agent may handle general queries, while another focuses on document analysis or data retrieval.
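As a tiny illustration of that idea, delegation can start with something as simple as a keyword check before dispatching to an agent. This is a minimal sketch for illustration only; the agent names are hypothetical placeholders, not nodes in our graph:

```typescript
// A minimal routing sketch: pick a specialized agent by inspecting the query.
// The agent names here are placeholders for illustration.
type AgentName = "general" | "documents" | "data";

function routeQuery(query: string): AgentName {
  const q = query.toLowerCase();
  if (q.includes("document") || q.includes("pdf")) return "documents";
  if (q.includes("fetch") || q.includes("lookup")) return "data";
  return "general"; // fall back to the general-purpose agent
}
```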
1. Modify the AI Agent Workflow in LangGraph
Create a new file, /lib/multiAgentGraph.ts:
import { OpenAI } from "openai";
import { Pinecone } from "@pinecone-database/pinecone";
import { Graph, Node, Edge } from "langgraph";

type AIContext = { query: string; response: string; memory: string[] };

// These secrets stay on the server, so they must NOT use the NEXT_PUBLIC_
// prefix — that would bundle them into the client and expose your keys.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const pinecone = new Pinecone({
  apiKey: process.env.PINECONE_API_KEY!,
  environment: process.env.PINECONE_ENVIRONMENT!,
});

const aiGraph = new Graph<AIContext>();

// General Query Agent: answers the user, with retrieved memory as context
const generalAgent = new Node(async (ctx: AIContext) => {
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content: `Relevant past interactions:\n${ctx.memory.join("\n")}`,
      },
      { role: "user", content: ctx.query },
    ],
  });
  ctx.response = response.choices[0].message.content ?? "";
});

// Memory Storage Agent: Pinecone stores numeric vectors, not raw strings,
// so we embed the query first and keep the original text in metadata
// for later retrieval.
const memoryAgent = new Node(async (ctx: AIContext) => {
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: ctx.query,
  });
  const vectorStore = pinecone.index(process.env.PINECONE_INDEX!);
  await vectorStore.upsert([
    {
      id: Date.now().toString(),
      values: embedding.data[0].embedding,
      metadata: { text: ctx.query },
    },
  ]);
  ctx.memory.push(ctx.query);
});

aiGraph.addNode("General Query Agent", generalAgent);
aiGraph.addNode("Memory Agent", memoryAgent);
aiGraph.addEdge(new Edge("Start", "General Query Agent"));
aiGraph.addEdge(new Edge("General Query Agent", "Memory Agent"));

export { aiGraph };
This setup ensures that every user query is answered by the General Query Agent (with retrieved memory as context) and then stored in Pinecone for future reference.
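One of our Part 2 goals is letting agents fetch data from external sources, so here is a hedged sketch of how a third node could slot into the same graph. It would live in the same /lib/multiAgentGraph.ts; the endpoint URL is a hypothetical placeholder:

```typescript
// Hypothetical External Data Agent: fetches data from an outside API and
// attaches it to the context so downstream agents can use it.
// The URL below is a placeholder — swap in a real data source.
const externalDataAgent = new Node(async (ctx: AIContext) => {
  const res = await fetch(
    "https://api.example.com/data?q=" + encodeURIComponent(ctx.query)
  );
  if (res.ok) {
    const data = await res.json();
    ctx.memory.push(`External data: ${JSON.stringify(data)}`);
  }
});

// Wire it into the existing graph when a query needs live data:
// aiGraph.addNode("External Data Agent", externalDataAgent);
// aiGraph.addEdge(new Edge("General Query Agent", "External Data Agent"));
```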
Step 2: Upgrading the API to Support Contextual Conversations
Update /pages/api/ask.ts to retrieve memory and allow the AI to recall past interactions:
import { NextApiRequest, NextApiResponse } from "next";
import { OpenAI } from "openai";
import { Pinecone } from "@pinecone-database/pinecone";
import { aiGraph } from "../../lib/multiAgentGraph";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const pinecone = new Pinecone({
  apiKey: process.env.PINECONE_API_KEY!,
  environment: process.env.PINECONE_ENVIRONMENT!,
});

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== "POST") return res.status(405).json({ error: "Method Not Allowed" });
  const { query } = req.body;
  if (!query) return res.status(400).json({ error: "Missing query" });
  // Pinecone queries take a numeric vector, so embed the query text first
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: query,
  });
  const vectorStore = pinecone.index(process.env.PINECONE_INDEX!);
  const previousContext = await vectorStore.query({
    topK: 5,
    vector: embedding.data[0].embedding,
    includeMetadata: true,
  });
  // Recover past interaction text from metadata (the IDs are just timestamps)
  const memory = previousContext.matches
    .map((m) => (m.metadata?.text as string) ?? "")
    .filter(Boolean);
  const context = { query, response: "", memory };
  await aiGraph.run(context);
  return res.json({ response: context.response });
}
Now, our AI agent can retrieve previous interactions, making conversations more natural and contextual.
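To sanity-check the endpoint before touching the UI, you can hit it from a small script with the dev server running. This is just a local test sketch; the question text is arbitrary:

```typescript
// Quick local test of the /api/ask endpoint (dev server must be running)
const res = await fetch("http://localhost:3000/api/ask", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "What did we discuss earlier?" }),
});
console.log(await res.json()); // => { response: "..." }
```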
Step 3: Enhancing the Frontend UI for Real-Time Chat
We’ll modify our chat interface to display AI responses dynamically.
Update /pages/index.tsx:
import { useState } from "react";
export default function Home() {
const [query, setQuery] = useState("");
const [conversation, setConversation] = useState<{ user: string; ai: string }[]>([]);
const handleSubmit = async () => {
if (!query.trim()) return; // ignore empty submissions (the API would 400)
const res = await fetch("/api/ask", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ query }),
});
const data = await res.json();
// Functional update avoids clobbering state if submissions overlap
setConversation((prev) => [...prev, { user: query, ai: data.response }]);
setQuery("");
};
return (
<div className="container mx-auto p-6">
<h1 className="text-2xl font-bold">AI Agent Chat</h1>
<div className="chat-box mt-4 p-4 border rounded">
{conversation.map((msg, index) => (
<div key={index} className="mt-2">
<p className="text-blue-600">User: {msg.user}</p>
<p className="text-gray-700">AI: {msg.ai}</p>
</div>
))}
</div>
<input
type="text"
placeholder="Ask something..."
value={query}
onChange={(e) => setQuery(e.target.value)}
className="w-full p-2 border rounded mt-4"
/>
<button onClick={handleSubmit} className="mt-4 bg-blue-500 text-white p-2 rounded">Ask</button>
</div>
);
}
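The UI above only renders once the full answer arrives. If you later want token-by-token "live" responses, one common pattern is to read a streamed response body on the client. A minimal sketch, assuming a hypothetical /api/ask-stream route that streams plain text (not part of our current API):

```typescript
// Sketch: read a streamed plain-text response chunk by chunk.
// Assumes a hypothetical /api/ask-stream endpoint that streams text.
async function askStreaming(query: string, onChunk: (text: string) => void) {
  const res = await fetch("/api/ask-stream", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true })); // append to the UI as it arrives
  }
}
```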
Possible Enhancements:
✅ Implement external API integrations (e.g., fetching real-world data).
✅ Introduce multi-modal AI (support for images, voice, and documents).
✅ Deploy the AI agent on Vercel for production use.
✅ Improve UI/UX for a more interactive chat experience.
✅ Add personalized AI responses based on user profiles.
This AI-powered assistant is now ready for real-world applications like customer support, automation, and intelligent task execution. Ready to take it further? Let’s keep building! 💡
🔗 Join the newsletter: https://bobbyhalljr.dev/newsletter