An AI Chatbot with Next.js, Groq, and Llama

AI Chatbot Documentation

This documentation provides a detailed explanation of an AI chatbot built using Next.js, Groq, and the Llama language model. The chatbot is designed to assist users in various domains, including academics, coding, and machine learning.


Table of Contents

  1. Introduction
  2. Core Components
  3. Code Explanation
  4. How It Works
  5. Setup and Deployment

Introduction

The AI chatbot leverages the power of Llama (a large language model) to generate responses based on user input. It uses Groq as the inference service that hosts the Llama model, and Next.js to build the frontend and backend in a single application. The chatbot is designed to be interactive, responsive, and user-friendly.

Core Components

Server-Side API

The server-side API handles incoming requests from the client, processes the messages, interacts with the Groq service, and streams the response back to the client.
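
Concretely, the route accepts a JSON body carrying the prior conversation plus the user's latest message, and it responds with a plain text stream. A sketch of the request shape (field names taken from the code later in this documentation; the message text is illustrative):

{
  "messages": [
    { "role": "user", "parts": [{ "text": "What is a tensor?" }] },
    { "role": "model", "parts": [{ "text": "A tensor is a multi-dimensional array." }] }
  ],
  "msg": "Can you give an example?"
}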

Client-Side Interface

The client-side interface is built using React components and provides a clean, intuitive UI for interacting with the chatbot. It includes features such as real-time streaming of responses, message history, and a sleek design.

Code Explanation

Server-Side Code

Below is the server-side code responsible for handling chat requests and interacting with the Groq service.

import Groq from 'groq-sdk';

// Use a server-only variable here. NEXT_PUBLIC_-prefixed variables are
// inlined into the client bundle, which would expose the API key to browsers.
const groq = new Groq({
  apiKey: process.env.GROQ_API_KEY
});

const systemPrompt = 
  "You are a friendly and knowledgeable academic assistant, " +
  "coding assistant and a teacher of anything related to AI and Machine Learning. " +
  "Your role is to help users with anything related to academics, " +
  "provide detailed explanations, and support learning across various domains.";

export async function POST(request) {
  try {
    const { messages, msg } = await request.json();

    // Safely handle undefined or null messages
    const processedMessages = messages && Array.isArray(messages) 
      ? messages.reduce((acc, m) => {
          if (m && m.parts && m.parts[0] && m.parts[0].text) {
            acc.push({
              role: m.role === "model" ? "assistant" : "user",
              content: m.parts[0].text
            });
          }
          return acc;
        }, [])
      : [];

    const enhancedMessages = [
      { role: "system", content: systemPrompt },
      ...processedMessages,
      { role: "user", content: msg }
    ];

    const stream = await groq.chat.completions.create({
      messages: enhancedMessages,
      model: "llama3-8b-8192", // Choose your preferred model
      stream: true,
      max_tokens: 1024,
      temperature: 0.7,
    });

    // Create a custom readable stream to parse the chunks
    const responseStream = new ReadableStream({
      async start(controller) {
        const encoder = new TextEncoder();

        try {
          for await (const chunk of stream) {
            const content = chunk.choices[0]?.delta?.content;

            if (content) {
              controller.enqueue(encoder.encode(content));
            }
          }
        } catch (error) {
          console.error("Streaming error:", error);
          controller.error(error);
          return; // the stream is already errored; calling close() here would throw
        }
        controller.close();
      }
    });

    return new Response(responseStream);
  } catch (error) {
    console.error("Error in chat API:", error);
    return new Response(JSON.stringify({ 
      error: "An error occurred processing your request",
      details: error.message 
    }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' }
    });
  }
}

Explanation:

  1. Importing Dependencies:

    • Groq is imported from the groq-sdk package to interact with the Groq service.
    • The POST function is defined to handle incoming HTTP POST requests.
  2. System Prompt:

    • A predefined system prompt is set up to guide the behavior of the Llama model. This ensures the chatbot remains helpful and knowledgeable.
  3. Processing Messages:

    • The messages array from the request is processed to extract the relevant content. Each message is converted into the { role, content } format the Groq API expects (a worked example follows this list).
  4. Enhancing Messages:

    • The system prompt is added to the list of messages, followed by the user's latest message.
  5. Creating Completion Stream:

    • The groq.chat.completions.create method is used to generate a response from the Llama model. The response is streamed back to the client in real time.
  6. Readable Stream:

    • A custom ReadableStream is created to parse the streamed response and send it back to the client incrementally.
  7. Error Handling:

    • Errors during the process are caught and logged, and an appropriate error response is sent back to the client.
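
To make step 3 concrete, here is the same reduce transformation applied to a hypothetical two-message history. The client-side "model" role is mapped to "assistant", which is the role name the Groq chat API expects:

const incoming = [
  { role: "user", parts: [{ text: "What is a closure?" }] },
  { role: "model", parts: [{ text: "A closure is a function plus its lexical scope." }] },
];

const processed = incoming.reduce((acc, m) => {
  if (m && m.parts && m.parts[0] && m.parts[0].text) {
    acc.push({
      role: m.role === "model" ? "assistant" : "user",
      content: m.parts[0].text,
    });
  }
  return acc;
}, []);

// processed:
// [ { role: "user", content: "What is a closure?" },
//   { role: "assistant", content: "A closure is a function plus its lexical scope." } ]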

Client-Side Code

Below is the client-side code for the chat interface.

'use client'
import React, { useEffect, useRef, useState } from 'react';
import { Textarea } from "@/components/ui/textarea";
import { Button } from "@/components/ui/button";
import { ScrollArea } from "@/components/ui/scroll-area";
import { Send, Bot, User, Zap } from "lucide-react";
import ReactMarkdown from 'react-markdown';

const ChatInterface = () => {
  const [messages, setMessages] = useState([
    {
      role: "model",
      parts: [{ text: "Hello! I'm ready to assist you. What would you like to explore today?" }],
    },
  ]);
  const [message, setMessage] = useState("");
  const [isLoading, setIsLoading] = useState(false);
  const messagesEndRef = useRef(null);

  const scrollToBottom = () => {
    messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
  };

  useEffect(() => {
    scrollToBottom();
  }, [messages]);

  const sendMessage = async () => {
    if (!message.trim() || isLoading) return;
    setIsLoading(true);
    setMessage("");

    setMessages((messages) => [
      ...messages,
      { role: "user", parts: [{ text: message }] },
      { role: "model", parts: [{ text: "" }] },
    ]);

    try {
      const response = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          messages, // must match the key the API route destructures
          msg: message,
        }),
      });

      if (!response.ok) throw new Error("Network response was not ok");

      if (response.body) {
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        let fullResponse = "";

        while (true) {
          const { done, value } = await reader.read();
          if (done) break;

          const text = decoder.decode(value || new Uint8Array(), { stream: true });
          fullResponse += text;

          setMessages((messages) => {
            const lastMessage = messages[messages.length - 1];
            const otherMessages = messages.slice(0, messages.length - 1);
            return [
              ...otherMessages,
              {
                ...lastMessage,
                parts: [{ text: fullResponse }],
              },
            ];
          });
        }
      }
    } catch (error) {
      console.error("Error:", error);
      setMessages((messages) => [
        ...messages,
        {
          role: "model",
          parts: [{ text: "Apologies, an unexpected error occurred. Please try again." }],
        },
      ]);
    }

    setIsLoading(false);
  };

  const handleKeyPress = (e) => {
    if (e.key === "Enter" && !e.shiftKey) {
      e.preventDefault();
      sendMessage();
    }
  };

  return (
    <div className="min-h-screen bg-[#1C1C1E] flex items-center justify-center p-4">
      <div className="w-full max-w-4xl bg-[#2C2C2E] rounded-2xl shadow-2xl border border-[#3A3A3C] overflow-hidden">
        {/* Header */}
        <div className="bg-[#3A3A3C] p-4 md:p-6 flex justify-between items-center">
          <div className="flex items-center gap-3">
            <div className="bg-[#4A4A4C] p-2 rounded-full">
              <Zap className="w-6 h-6 text-[#5E5CE6]" />
            </div>
            <h1 className="text-xl md:text-2xl font-bold text-white">AI Companion</h1>
          </div>
        </div>

        {/* Chat Area */}
        <div className="p-4 md:p-6 flex flex-col h-[75vh]">
          <ScrollArea className="flex-grow mb-4 pr-4">
            <div className="space-y-4">
              {messages.map((message, index) => (
                <div
                  key={index}
                  className={`flex items-start ${
                    message.role === "user" ? "justify-end" : "justify-start"
                  }`}
                >
                  <div
                    className={`flex gap-3 max-w-[85%] md:max-w-[75%] ${
                      message.role === "user" ? "flex-row-reverse" : "flex-row"
                    }`}
                  >
                    <div className="flex h-10 w-10 shrink-0 select-none items-center justify-center rounded-full bg-[#3A3A3C]">
                      {message.role === "user" ? (
                        <User className="h-5 w-5 text-[#5E5CE6]" />
                      ) : (
                        <Bot className="h-5 w-5 text-[#5E5CE6]" />
                      )}
                    </div>
                    <div
                      className={`rounded-2xl px-4 py-3 shadow-lg ${
                        message.role === "user"
                          ? "bg-[#5E5CE6] text-white"
                          : "bg-[#3A3A3C] text-white"
                      }`}
                    >
                      <div className="prose prose-invert max-w-none">
                        <ReactMarkdown>{message.parts[0].text}</ReactMarkdown>
                      </div>
                    </div>
                  </div>
                </div>
              ))}
              {isLoading && (
                <div className="flex justify-center">
                  <div className="animate-pulse rounded-full h-8 w-8 bg-[#5E5CE6]" />
                </div>
              )}
              <div ref={messagesEndRef} />
            </div>
          </ScrollArea>

          {/* Input Area */}
          <div className="flex gap-3 mt-auto">
            <Textarea
              value={message}
              onChange={(e) => setMessage(e.target.value)}
              onKeyDown={handleKeyPress}
              placeholder="Type your message..."
              className="h-[40px] text-white bg-[#3A3A3C] border-gray-100 focus:border-blue-200 resize-none rounded-xl"
              disabled={isLoading}
            />
            <Button
              size="icon"
              className="h-[60px] w-[60px] bg-[#5E5CE6] hover:bg-[#4B3FD6] rounded-xl transition-all duration-300 ease-in-out transform hover:scale-105"
              onClick={sendMessage}
              disabled={!message.trim() || isLoading}
            >
              <Send className="h-5 w-5 text-white" />
            </Button>
          </div>
        </div>
      </div>
    </div>
  );
};

export default ChatInterface;

Explanation:

  1. State Management:

    • messages: Stores the conversation history between the user and the chatbot.
    • message: Holds the current user input.
    • isLoading: Indicates whether the chatbot is generating a response.
  2. Scroll Behavior:

    • The scrollToBottom function ensures that the chat area scrolls to the bottom whenever a new message is added.
  3. Sending Messages:

    • The sendMessage function sends the user's message to the server via an API call and updates the conversation history with the bot's response.
  4. Real-Time Streaming:

    • The response from the server is streamed incrementally, and the chat interface updates in real time as the bot generates its response (a note on the state-update pattern follows this list).
  5. UI Components:

    • The chat interface includes a header, a scrollable chat area, and an input area for sending messages.
    • Icons (User, Bot, Zap, Send) are used to enhance the visual appeal and provide clear indicators of who is speaking.
  6. Error Handling:

    • If an error occurs during the API call, the chatbot displays an appropriate error message.
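
One subtlety in step 4 deserves a closer look: inside the read loop, setMessages uses the functional updater form, because the messages variable captured by the sendMessage closure is stale once streaming begins. Here is the pattern from the code above, annotated:

// Inside the read loop: replace the trailing placeholder with the text so far.
// The functional updater is required because `messages` captured by the
// closure still holds the pre-send state; `prev` is always the latest state.
setMessages((prev) => {
  const last = prev[prev.length - 1]; // the empty "model" placeholder
  const rest = prev.slice(0, -1);
  return [...rest, { ...last, parts: [{ text: fullResponse }] }];
});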

How It Works

  1. User Interaction:

    • The user types a message in the input area and submits it either by pressing the "Send" button or hitting the Enter key.
  2. Server Processing:

    • The server receives the message, processes it, and sends it to the Groq service for completion generation.
  3. Response Streaming:

    • The generated response is streamed back to the client in real time, updating the chat interface as chunks arrive (see the sketch after this list for a way to exercise the endpoint directly).
  4. Display:

    • The chat interface displays the conversation history, distinguishing between user and bot messages using different colors and icons.
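
You can also exercise the endpoint outside the UI. Below is a minimal sketch, assuming Node 18+ and the dev server running on localhost:3000 (the file name and prompt are illustrative):

// test-chat.mjs — run with: node test-chat.mjs
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ messages: [], msg: "Explain gradient descent in one sentence." }),
});

// Read the plain text stream chunk by chunk, printing as it arrives
const reader = res.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value, { stream: true }));
}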

Setup and Deployment

Prerequisites

  • Node.js and npm installed on your machine.
  • A Groq API key with access to the Llama model.

Steps

  1. Clone the Repository:

   git clone https://github.com/Minty-cyber/AI-Chatbot
   cd AI-Chatbot

  2. Install Dependencies:

   npm install

  3. Set Environment Variables:

    • Create a .env.local file in the project root. Note the variable is server-only (no NEXT_PUBLIC_ prefix), so the key is never shipped to the browser:

   GROQ_API_KEY=your-groq-api-key

  4. Run the Application:

   npm run dev

  5. Deploy:

    • Use a platform like Vercel or Netlify, and set GROQ_API_KEY in the platform's environment settings. An illustrative sketch follows below.
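
As one possible route, a sketch using the Vercel CLI (assumes you have a Vercel account; adapt to your platform of choice):

   npm install -g vercel
   vercel login
   vercel env add GROQ_API_KEY
   vercel --prod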
