KAMAL KISHOR

Building a RAG (Retrieval-Augmented Generation) Application Using Deep Seek R1 from Scratch

Retrieval-Augmented Generation (RAG) combines a retrieval system with a generative model to produce more accurate, context-rich answers. Deep Seek R1 helps us build such systems efficiently by pairing retrieval capabilities with an advanced language model. In this blog, we’ll walk through building a RAG application from scratch using Deep Seek R1.


1. Understanding the Architecture of RAG

RAG applications are built around three primary components, which fit together as sketched below:

  1. Retriever: Finds relevant documents from a knowledge base.
  2. Generator: Uses retrieved documents as context to generate answers.
  3. Knowledge Base: Stores all the documents or information in an easily retrievable format.
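
Conceptually, a single query flows through these components as follows. This is a pseudocode-style sketch (generator here stands in for any callable language model); the concrete retriever and generator are built in the sections below.

def answer(query, retriever, generator):
    docs = retriever.search(query, top_k=3)    # 1. Retriever finds relevant documents
    prompt = query + "\n\n" + "\n".join(docs)  # 3. Knowledge base content becomes context
    return generator(prompt)                   # 2. Generator produces the final answer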

2. Setting Up the Environment

Step 1: Initialize the Project

Ensure you have Python installed, then create a new project directory and set up a virtual environment so the project's dependencies stay isolated.

mkdir rag-deepseek-app
cd rag-deepseek-app
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate for Windows

Step 2: Install Required Dependencies

With the virtual environment active, install the required libraries, including Deep Seek R1:

pip install deep-seek-r1 langchain transformers sentence-transformers faiss-cpu
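
To confirm the environment is ready, check that the core imports resolve. This assumes the deep-seek-r1 package installs a deep_seek_r1 module, which is how the code in this post uses it:

python -c "import langchain, transformers, sentence_transformers, faiss, deep_seek_r1"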

3. Building the Knowledge Base

The knowledge base is the heart of a RAG system. For this example, we’ll use text documents, but you can extend it to PDFs, databases, or other formats.

Step 1: Prepare the Data

Organize your documents in a folder named data.

rag-deepseek-app/
└── data/
    ├── doc1.txt
    ├── doc2.txt
    └── doc3.txt
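
If you don't have documents on hand, a small script can generate sample files to follow along with (the contents below are purely illustrative):

import os

# Create the data directory and write a few sample documents
os.makedirs('data', exist_ok=True)
samples = {
    'doc1.txt': 'Climate change shortens growing seasons and lowers crop yields in many regions.',
    'doc2.txt': 'Drip irrigation and drought-resistant seed varieties help farmers adapt.',
    'doc3.txt': 'AI in healthcare supports diagnosis, triage, and drug discovery.',
}
for name, text in samples.items():
    with open(os.path.join('data', name), 'w', encoding='utf-8') as f:
        f.write(text)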

Step 2: Embed the Documents

Embed the documents with a sentence-transformer model, then index them with Deep Seek R1's retriever for efficient retrieval.

import os

from deep_seek_r1 import DeepSeekRetriever
from sentence_transformers import SentenceTransformer

# Load the embedding model
embedding_model = SentenceTransformer('all-MiniLM-L6-v2')

# Read every .txt document from the data directory
data_dir = './data'
documents = []
for file_name in sorted(os.listdir(data_dir)):
    if not file_name.endswith('.txt'):
        continue
    with open(os.path.join(data_dir, file_name), 'r', encoding='utf-8') as file:
        documents.append(file.read())

# Embed the documents
embeddings = embedding_model.encode(documents, convert_to_tensor=True)

# Initialize the retriever and index the documents
retriever = DeepSeekRetriever()
retriever.add_documents(documents, embeddings)
retriever.save('knowledge_base.ds')  # Save the retriever state
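
As an aside, faiss-cpu is already in our requirements, so if you prefer a vector index that doesn't depend on the Deep Seek R1 retriever API, the same embeddings can be served by FAISS. A minimal sketch:

import faiss

# encode() without convert_to_tensor returns a float32 NumPy array, which FAISS expects
doc_vectors = embedding_model.encode(documents)
faiss.normalize_L2(doc_vectors)                  # normalize so inner product equals cosine similarity
index = faiss.IndexFlatIP(doc_vectors.shape[1])  # exact inner-product index
index.add(doc_vectors)

def faiss_search(query, top_k=3):
    query_vec = embedding_model.encode([query])
    faiss.normalize_L2(query_vec)
    _, ids = index.search(query_vec, top_k)
    return [documents[i] for i in ids[0]]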

4. Building the Retrieval and Generation Pipeline

Now, we’ll set up the pipeline to retrieve relevant documents and generate responses.

Step 1: Load the Retriever

retriever = DeepSeekRetriever.load('knowledge_base.ds')
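
Before wiring up generation, it's worth smoke-testing retrieval on its own, using the same search method the full pipeline calls later:

# Quick check: inspect the top hits for a sample query
docs = retriever.search("climate change and agriculture", top_k=3)
for i, doc in enumerate(docs, 1):
    print(f"[{i}] {doc[:100]}...")  # preview the first 100 characters of each hit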

Step 2: Integrate the Generator

We’ll use a Hugging Face Transformers causal language model for generation. GPT-2 serves here as a lightweight stand-in; you can swap in a stronger model (for example, an OpenAI GPT model via its API) without changing the rest of the pipeline.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the generator model
generator_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def generate_response(query, retrieved_docs):
    # Combine the query and retrieved documents into one prompt
    input_text = query + "\n\n" + "\n".join(retrieved_docs)

    # Tokenize, truncating to fit the model's context window
    inputs = tokenizer.encode(input_text, return_tensors='pt', max_length=512, truncation=True)

    # Use max_new_tokens so the prompt length doesn't eat into the generation budget
    outputs = generator_model.generate(
        inputs,
        max_new_tokens=150,
        num_return_sequences=1,
        pad_token_id=tokenizer.eos_token_id,
    )

    # Decode only the newly generated tokens, not the echoed prompt
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

5. Querying the System

Here’s how we put everything together to handle user queries.

def rag_query(query):
    # Retrieve relevant documents
    retrieved_docs = retriever.search(query, top_k=3)

    # Generate a response
    response = generate_response(query, retrieved_docs)

    return response

Example Query

query = "What is the impact of climate change on agriculture?"
response = rag_query(query)
print(response)

6. Deploying the Application

To make the RAG system accessible, you can deploy it using Flask or FastAPI.

Step 1: Set Up Flask

Install Flask:

pip install flask

Create an app.py file:

from flask import Flask, request, jsonify
from transformers import AutoModelForCausalLM, AutoTokenizer
from deep_seek_r1 import DeepSeekRetriever

# Initialize components once at startup
retriever = DeepSeekRetriever.load('knowledge_base.ds')
generator_model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def generate_response(query, retrieved_docs):
    input_text = query + "\n\n" + "\n".join(retrieved_docs)
    inputs = tokenizer.encode(input_text, return_tensors='pt', max_length=512, truncation=True)
    outputs = generator_model.generate(
        inputs,
        max_new_tokens=150,
        num_return_sequences=1,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, not the echoed prompt
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

app = Flask(__name__)

@app.route('/query', methods=['POST'])
def handle_query():
    data = request.get_json(silent=True) or {}
    query = data.get('query', '')
    if not query:
        return jsonify({'error': 'Query is required'}), 400

    retrieved_docs = retriever.search(query, top_k=3)
    response = generate_response(query, retrieved_docs)
    return jsonify({'response': response})

if __name__ == '__main__':
    app.run(debug=True)  # debug=True is for local development only

Run the server:

python app.py

Step 2: Test the API

Use Postman or curl to send a query:

curl -X POST http://127.0.0.1:5000/query -H "Content-Type: application/json" -d '{"query": "What is the future of AI in healthcare?"}'
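
Or send the same request from Python with the requests library:

import requests

resp = requests.post(
    'http://127.0.0.1:5000/query',
    json={'query': 'What is the future of AI in healthcare?'},
)
print(resp.json()['response'])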

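
Optional: Using FastAPI Instead

If you prefer FastAPI, the other framework mentioned above, the equivalent endpoint looks like this. It is a minimal sketch that assumes the retriever and the generate_response function from the Flask version are defined in the same module; note that uvicorn serves on port 8000 by default rather than Flask's 5000.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class QueryRequest(BaseModel):
    query: str

@app.post('/query')
def handle_query(body: QueryRequest):
    if not body.query:
        raise HTTPException(status_code=400, detail='Query is required')
    retrieved_docs = retriever.search(body.query, top_k=3)
    return {'response': generate_response(body.query, retrieved_docs)}

Run it with uvicorn app:app --reload.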