
Ankur Tyagi for AgentCloud

Originally published at agentcloud.dev

How to Build a RAG Chatbot with Agent Cloud and MongoDB

Introduction

Enterprises are constantly seeking ways to improve efficiency, gain a competitive edge, and deliver exceptional customer service. Retrieval-Augmented Generation (RAG) technology is emerging as a powerful tool that addresses these needs by combining information retrieval with AI generation. This innovative approach unlocks a range of benefits that can significantly transform how enterprises operate.

One of the most impactful applications of RAG lies in enhanced customer support. By retrieving information from a company's knowledge base or past customer interactions, RAG can empower chatbots and virtual assistants to provide more accurate and contextually relevant responses. This translates to faster resolution times, improved customer satisfaction, and a reduction in the burden on human support teams.

Another key advantage of RAG is its ability to streamline knowledge management. Enterprises often struggle with vast amounts of unstructured data stored in documents, emails, and reports. RAG tackles this challenge by enabling users to quickly retrieve the information they need. This empowers employees to find answers to internal queries, access relevant documents for decision-making, and conduct research more efficiently, ultimately boosting overall productivity.

RAG goes beyond simply retrieving and generating information. It can also play a crucial role in data analysis. By identifying relevant data points and insights from large datasets, RAG can automate parts of the data analysis pipeline, leading to faster extraction of actionable insights. This empowers enterprises to make data-driven decisions with greater speed and accuracy.


In this blog, we will learn to build a RAG chatbot in minutes using AgentCloud and MongoDB.

AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. Internally, AgentCloud uses Airbyte to build data pipelines that split, chunk, and embed data from over 300 data sources, including NoSQL databases like MongoDB. It simplifies ingesting data into the vector store, both for the initial setup and for subsequent scheduled updates, so the vector store always stays current. AgentCloud uses Qdrant as the vector store to efficiently store and manage large sets of vector embeddings. For a given user query, the RAG application fetches relevant documents from the vector store by measuring how similar each document's vector representation is to the query vector.
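To make the "similar vector representation" idea concrete, here is a minimal, self-contained sketch (plain Python with NumPy, not AgentCloud code) of ranking documents against a query by cosine similarity; the vectors are toy values for illustration:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: 1.0 means the vectors point in the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings" for three documents and one user query.
doc_vectors = {
    "python_course": np.array([0.9, 0.1, 0.0, 0.2]),
    "cooking_course": np.array([0.1, 0.8, 0.3, 0.0]),
    "webdev_course": np.array([0.7, 0.2, 0.1, 0.4]),
}
query_vector = np.array([0.85, 0.15, 0.05, 0.25])

# Rank documents by similarity to the query; the top hits become the LLM's context.
ranked = sorted(
    doc_vectors.items(),
    key=lambda item: cosine_similarity(query_vector, item[1]),
    reverse=True,
)
print(ranked[0][0])  # the most relevant document key
```

In a real deployment the vectors come from an embedding model and the ranking is done by the vector store (Qdrant here), but the underlying idea is the same.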


Setting up Agent Cloud via Docker

To run AgentCloud locally, you must have Docker installed on your system. Then follow the steps below.

  • Clone the repo:

git clone https://github.com/rnadigital/agentcloud.git

  • Go to the agentcloud directory:

cd agentcloud

  • Run locally using this command:

chmod +x install.sh && ./install.sh

Here is the Quickstart Guide in the docs.

Running the install command downloads the required Docker images and starts the containers in Docker.


Once the install script has executed successfully, we can view the running containers in the Docker app:


To access Agent Cloud in the browser, open the URL below:
http://localhost:3000/register


Next, we need to sign up on the platform.


After signing up, log in to the app to reach this landing screen.


Congrats! Our setup is now complete.

Next, we will move towards building our RAG application.


Adding New Model

Agent Cloud allows us to use models from providers like FastEmbed and OpenAI in our app.

To add a new model, let's go to the Models screen and click the Add Model option.


On the configure screen, you can select a model; I have selected the fast-bge-small-en model to embed the text content.

Then click on the Save button to complete the model setup.

FastEmbed, a lightweight library with minimal dependencies, is ideal for serverless environments like AWS Lambda. The core model, fast-bge-small-en, efficiently captures text meaning for tasks like classification and retrieval due to its compact size. This combination offers developers a powerful solution for real-time text analysis in serverless deployments.
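To illustrate what this embedding step looks like in code, here is a small sketch that calls the open-source fastembed library directly (outside Agent Cloud). Note that the model identifier below, BAAI/bge-small-en-v1.5, is the library's own name for a small BGE model and may not match the exact fast-bge-small-en entry shown in the Agent Cloud UI:

```python
from fastembed import TextEmbedding

# Load a compact BGE model; fastembed downloads the ONNX weights on first use.
model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")

texts = [
    "Introduction to Python for absolute beginners",
    "Web development bootcamp with a two-week project sprint",
]

# embed() yields one NumPy vector per input text.
vectors = list(model.embed(texts))
print(len(vectors), vectors[0].shape)  # 2 vectors, 384 dimensions each
```

Agent Cloud runs this step for us inside its Airbyte-powered pipeline; the snippet is only meant to show what "embedding a field" means.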


After successfully adding the model, we will be able to view it in the Models list.



Creating DataSource

We will be using MongoDB as our data source.

MongoDB is a NoSQL database that offers a flexible alternative to traditional relational databases. Unlike relational databases with rigid schemas, MongoDB stores data in JSON-like documents, allowing it to adapt easily to ever-changing data structures.

In our MongoDB, we have a database called course_db which contains a collection called course_catalog.

Inside this collection, we have stored information about different courses.


There are multiple fields in each document, but the fields we are interested in are listed below (a hypothetical sample document is sketched after the list):

  1. title
  2. description
  3. level
  4. duration
  5. skills_covered
  6. url
  7. meta_data
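For context, here is a hedged sketch of what one such document might look like and how it could be inserted with pymongo; the field values are invented for illustration, and the connection URI assumes MongoDB is reachable at localhost:27017:

```python
from pymongo import MongoClient

# Connect to a local MongoDB instance (adjust the URI for your setup).
client = MongoClient("mongodb://localhost:27017")
collection = client["course_db"]["course_catalog"]

# Hypothetical sample document; the real catalog entries will differ.
sample_course = {
    "title": "Python for Data Analysis",
    "description": "Hands-on introduction to analysing data with Python and pandas.",
    "level": "Beginner",
    "duration": "4 weeks",
    "skills_covered": ["Python", "pandas", "data visualization"],
    "url": "https://example.com/courses/python-for-data-analysis",
    "meta_data": "Python for Data Analysis | Beginner | 4 weeks | Python, pandas",
}

collection.insert_one(sample_course)
print(collection.count_documents({}))
```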

To access and utilize the MongoDB data within the RAG app, we'll create a MongoDB data source.

First, we need to go to the Data Sources page and click on the New Connection button.


We will select MongoDB as the Datasource.


We will set the Datasource Name to course_db_mongo, derived from the database name, and add a short description of the new data source. We have kept the Schedule Type as Manual, which means the MongoDB data will be synced to the vector store manually.


I am running MongoDB on my local machine with Docker. To connect Airbyte to MongoDB, we need to provide the MongoDB connection string and the Mongo database name. For the cluster type, I have selected Self-Managed Replica Set since MongoDB is running locally. The rest of the values can be kept at their defaults.
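Before pasting the connection string into Airbyte, it can be worth sanity-checking it with a quick pymongo ping. This is only a sketch: mongodb://host.docker.internal:27017 is an example URI for a MongoDB running on the Docker host and may not match your environment:

```python
from pymongo import MongoClient

# Example connection string; replace with the URI you plan to give the Airbyte connector.
MONGO_URI = "mongodb://host.docker.internal:27017"

client = MongoClient(MONGO_URI, serverSelectionTimeoutMS=3000)
client.admin.command("ping")  # raises an exception if the server is unreachable
print(client["course_db"].list_collection_names())
```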


Next, we need to select the collection that we want to sync, which is course_catalog. We will be syncing all of its fields to the vector store.


After that, we need to select the field to be embedded and click Continue. The meta_data field in MongoDB contains all the relevant information required, so we will select this field for embedding.


The data source is now created. On the first run, it embeds the Mongo data and stores it in the Qdrant vector store.


We can check the Qdrant DB running locally to verify the data sync.

The Qdrant DB runs on port 6333 and can be accessed at the link below.
http://localhost:6333/dashboard#/collections

On the Collections page, we can see that a new collection has been created.


As the data syncs, this collection gets populated with the documents.
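Besides the dashboard, you can verify the sync programmatically with the qdrant-client library. Here is a minimal sketch; the collection name created by Agent Cloud will differ per setup, so substitute the one shown in your dashboard:

```python
from qdrant_client import QdrantClient

# Connect to the local Qdrant instance started by the Agent Cloud install script.
client = QdrantClient(url="http://localhost:6333")

# List all collections, then count the points in the one created by the data source.
collections = client.get_collections().collections
print([c.name for c in collections])

collection_name = collections[0].name  # or paste the exact name from the dashboard
print(client.count(collection_name=collection_name, exact=True).count)
```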



Setting up tools

Tools are an essential component for enabling the AI agent to interact with its environment effectively, process information, and take appropriate actions to achieve its goals. The tools used by an AI agent can include functions, APIs, data sources, and other resources that help the agent perform specific tasks autonomously and efficiently.

The tool we set up here is responsible for querying the data source and fetching relevant documents. Agent Cloud creates a tool for us by default when a new data source is added. In the screenshot below, we can see that the course_db_mongo tool has already been created for us by the platform.


The AI agent reads the tool's description and judges whether to use the tool for a given task, so we need to make sure the description covers all the information the agent would require.



Creating Agent

An AI agent is a sophisticated system that uses LLM technology to reason through problems, create plans to solve them, and execute those plans with the assistance of various tools. These agents are characterized by their complex reasoning capabilities, memory functions, and ability to execute tasks autonomously.

To create the agent, we will first go to the Agents page and then click on the New Agent button.


This brings us to the agent configuration page, where we define the Name, Role, Goal, and Backstory of the agent. We have selected OpenAI GPT-4 as both the Model and the Function Calling Model.

In the Tools section, we will select the course_db_mongo tool.


If you don't have the OpenAI GPT-4 model configured, you can click on the Model option and add a new model. A modal for configuring a new model opens, where you can enter the model name, the model type, the credentials (your OpenAI API key), and finally the LLM model. On clicking the Save button, the OpenAI GPT-4 model will be configured.


Now click the Save button on the agent configuration page, and a new agent will be created for us.



Creating Task

Tasks are specific assignments given to an agent to complete. To create a new task, we need to click the Add Task button on the Tasks screen.


On the Task configuration page, we need to define the Name and Task Description.

We will select course_db_mongo as the Tool and Course Information Agent as the Preferred Agent.


On clicking the Save button, a new task will be created for us.



Creating App

We will now look into the App creation part.

In our app, we bind the Agent and the Task together to create a conversational RAG app. This app will help users answer questions related to courses.

In the app configuration we will select the App Type as Conversation Chat App.

The Task will be the Course Information Task, which we created before, and the Agent will be the Course Information Agent. We want the app to process tasks sequentially, so we will select the Process as Sequential. Finally, we will select OpenAI GPT-4 as the LLM model. Then we can click on the Save button to save our configuration.


Now let's test our app. Clicking the play button opens a chat window where we can have a conversation.


Let's check whether there are any Python courses on the list.

The Agent uses the course_db_mongo tool to retrieve the Python courses.
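Under the hood, the retrieval step performed by such a tool is conceptually similar to the sketch below: embed the user's question and ask Qdrant for the nearest stored documents. This is an illustration built directly on fastembed and qdrant-client, not Agent Cloud's actual implementation, and the collection name is a placeholder:

```python
from fastembed import TextEmbedding
from qdrant_client import QdrantClient

model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")
client = QdrantClient(url="http://localhost:6333")

# Embed the user's question into the same vector space as the course documents.
question = "Are there any Python courses on the list?"
query_vector = list(model.embed([question]))[0]

# Fetch the closest matches from the collection created by the data source.
hits = client.search(
    collection_name="course_db_mongo",  # placeholder: use your collection's real name
    query_vector=query_vector.tolist(),
    limit=3,
)
for hit in hits:
    print(hit.score, hit.payload)
```

Roughly speaking, the retrieved payloads become the context the agent hands to GPT-4 when composing its answer.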


Let's see another example: we inquire about a beginner course on Google Workspace, and the agent is able to retrieve the course with the difficulty level of beginner.


Ok, ok, one last example: let's try to fetch a web development course with a duration of 2 weeks.


That's it for today.

🌐 Also, don't forget to check out our open source GitHub repository: AgentCloud GitHub Repo.

If you like what you see, give us a ⭐. Just click on the cat.

Thanks. You're cool.


Conclusion

In this blog, we learned to build a RAG chat app with Agent Cloud and MongoDB.

We covered:

  • How to create a data source
  • How to embed the data
  • How to store the embeddings in the Qdrant DB
  • How to build tools for agents
  • How to create an app where users can interact with their private data using Agent Cloud

🔍 Want to learn more about Agent Cloud? Read our other blogs.

  1. Agent Cloud vs. CrewAI
  2. Agent Cloud vs. OpenAI
  3. Agent Cloud vs. Vertex AI Agents
  4. How to Build a RAG Chatbot using Agent Cloud and BigQuery
