Introduction
AI agents are evolving beyond simple chatbots into powerful workflow orchestrators. In this guide, we'll build a fully functional AI agent using LangGraph, TypeScript, Next.js, TailwindCSS, and Pinecone for vector-based memory storage.
By the end of this tutorial, you'll have a working AI-powered assistant that can process user queries, retrieve knowledge from a Pinecone vector database, and dynamically generate responses.
Project Architecture
Tech Stack Overview
✅ LangGraph – To define AI workflows and agent orchestration.
✅ Next.js (TypeScript) – For the front-end and API layer.
✅ TailwindCSS – For UI styling.
✅ Pinecone – For vector-based memory storage and retrieval.
✅ OpenAI API – For LLM-based response generation.
Step 1: Setting Up the Project
1. Initialize the Next.js Project
npx create-next-app@latest ai-agent-app --typescript --tailwind --use-npm
cd ai-agent-app
npm install @pinecone-database/pinecone @langchain/langgraph openai
2. Configure Pinecone for Vector Storage
Create a .env.local file in the project root and add your keys. Keep them server-side only — do not use the NEXT_PUBLIC_ prefix, which would inline the secrets into the client bundle:
OPENAI_API_KEY=your_openai_api_key
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_INDEX=your_pinecone_index
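One thing to keep in mind before wiring up retrieval: it only returns something useful once the index actually contains vectors. Documents are typically split into overlapping chunks before being embedded and upserted. A minimal chunking sketch — `chunkText` is an illustrative helper of our own, and the size/overlap values are arbitrary choices, not tuned defaults:

```typescript
// Split a document into overlapping character chunks before embedding.
// chunkSize and overlap are illustrative values, not tuned defaults.
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    // Stop once the final chunk reaches the end of the text
    if (start + chunkSize >= text.length) break;
    // Step forward by chunkSize minus overlap so adjacent chunks share context
    start += chunkSize - overlap;
  }
  return chunks;
}
```

Each chunk would then be embedded (e.g. with OpenAI's embeddings endpoint) and upserted into the Pinecone index with an id and metadata.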
Step 2: Building the AI Agent Backend with LangGraph
1. Create the LangGraph Workflow (lib/langgraph.ts)
import { OpenAI } from "openai";
import { Pinecone } from "@pinecone-database/pinecone";
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

// Shared state that flows through the graph
const AgentState = Annotation.Root({
  query: Annotation<string>,
  response: Annotation<string>,
});

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

// Retrieval + generation node: embed the query, fetch similar vectors, ask the LLM
async function aiNode(state: typeof AgentState.State) {
  const index = pinecone.index(process.env.PINECONE_INDEX!);

  // Pinecone queries take an embedding vector, not raw text
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: state.query,
  });

  const results = await index.query({
    vector: embedding.data[0].embedding,
    topK: 5,
    includeMetadata: true,
  });

  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: "You are an AI assistant." },
      {
        role: "user",
        content: `${state.query}\nRelevant Data: ${JSON.stringify(results.matches)}`,
      },
    ],
  });

  return { response: completion.choices[0].message.content ?? "" };
}

export const aiGraph = new StateGraph(AgentState)
  .addNode("aiProcessing", aiNode)
  .addEdge(START, "aiProcessing")
  .addEdge("aiProcessing", END)
  .compile();
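For intuition on what the topK: 5 query returns: with the cosine metric, Pinecone ranks every stored vector by the cosine of the angle between it and the query embedding, and hands back the five highest-scoring matches. A self-contained sketch of that score (our own helper, written out for clarity):

```typescript
// Cosine similarity between two equal-length vectors: dot(a, b) / (|a| * |b|).
// 1 means the vectors point the same way, 0 means they are orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vectors must have equal length");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

This is why the embedding model matters: two texts about the same topic should produce vectors whose cosine similarity is high, so they surface in the topK results.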
Step 3: Creating the Next.js API Route
Create a new API route in pages/api/ask.ts:
import type { NextApiRequest, NextApiResponse } from "next";
import { aiGraph } from "../../lib/langgraph";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== "POST") return res.status(405).json({ error: "Method Not Allowed" });

  const { query } = req.body;
  if (!query) return res.status(400).json({ error: "Missing query" });

  try {
    const result = await aiGraph.invoke({ query });
    return res.status(200).json({ response: result.response });
  } catch (err) {
    console.error(err);
    return res.status(500).json({ error: "The agent failed to process the query" });
  }
}
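The route only checks that a query is present. In practice you'd want a slightly stricter guard before handing user input to the agent. A sketch — `parseQuery` is a hypothetical helper, and the 2,000-character cap is an arbitrary choice:

```typescript
// Hypothetical request-body guard for /api/ask: accept only a non-empty
// string query under a length cap; return null for anything else.
function parseQuery(body: unknown): string | null {
  if (typeof body !== "object" || body === null) return null;
  const query = (body as Record<string, unknown>).query;
  if (typeof query !== "string") return null;
  const trimmed = query.trim();
  // Reject empty queries and anything suspiciously long (arbitrary cap)
  if (trimmed.length === 0 || trimmed.length > 2000) return null;
  return trimmed;
}
```

In the handler you would call it as `const query = parseQuery(req.body);` and return a 400 when it yields null.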
Step 4: Building the Frontend UI with TailwindCSS
Update the pages/index.tsx file:
import { useState } from "react";

export default function Home() {
  const [query, setQuery] = useState("");
  const [response, setResponse] = useState("");
  const [loading, setLoading] = useState(false);

  const handleSubmit = async () => {
    setLoading(true);
    try {
      const res = await fetch("/api/ask", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query }),
      });
      const data = await res.json();
      setResponse(res.ok ? data.response : data.error ?? "Something went wrong");
    } catch {
      setResponse("Request failed. Is the dev server running?");
    } finally {
      setLoading(false);
    }
  };

  return (
    <div className="container mx-auto p-6">
      <h1 className="text-2xl font-bold">AI Agent Chat</h1>
      <input
        type="text"
        placeholder="Ask a question..."
        value={query}
        onChange={(e) => setQuery(e.target.value)}
        className="w-full p-2 border rounded mt-4"
      />
      <button
        onClick={handleSubmit}
        disabled={loading || !query.trim()}
        className="mt-4 bg-blue-500 text-white p-2 rounded disabled:opacity-50"
      >
        {loading ? "Thinking..." : "Ask"}
      </button>
      {response && <p className="mt-4 p-3 bg-gray-100 rounded">{response}</p>}
    </div>
  );
}
Final Steps: Running the AI Agent
1. Start the Next.js Server
npm run dev
2. Test the AI Agent
- Open http://localhost:3000 in your browser.
- Enter a query and see AI-powered responses backed by knowledge retrieved from Pinecone.
Conclusion & Next Steps
You now have a fully functional AI agent built with LangGraph, Pinecone, and Next.js! 🚀
Possible Enhancements:
✅ Add multi-agent workflows using LangGraph.
✅ Implement memory storage for better contextual conversations.
✅ Deploy to Vercel or a cloud-based API for scalability.
This AI-powered assistant can be extended to handle business automation, document analysis, and intelligent decision-making. Ready to take it further? Let’s build the future of AI together!
🔗 Stay updated with the latest AI projects—subscribe to my newsletter!