DEV Community

Majushree H
How to Install Ollama with DeepSeek-r1 and Integrate it with Python on Windows

Introduction

In the world of AI, conversational models like DeepSeek-r1 by Ollama are revolutionizing natural language processing. This guide will walk you through the process of installing Ollama with DeepSeek-r1 on your Windows machine and integrating it with Python. Whether you're building intelligent applications or exploring advanced AI, this tutorial will help you set up DeepSeek-r1 to enhance your projects with powerful conversational capabilities. Let's get started!

What is DeepSeek-r1?

DeepSeek-r1 is an advanced reasoning model developed by DeepSeek and distributed through Ollama, offering state-of-the-art performance on natural language processing (NLP) tasks. With strong reasoning and problem-solving capabilities, it's well suited for applications such as content generation, chatbots, and AI-driven customer support systems.

Key Features of DeepSeek-r1:

  • Optimized for NLP: DeepSeek-r1 is tailored for chat-based AI tasks, offering seamless natural language understanding.
  • Faster Inference: Optimized for real-time responses, this model is perfect for chatbots and virtual assistants.
  • Higher Accuracy: DeepSeek-r1 delivers refined performance in text generation, making it suitable for human-like conversational applications.
  • Specialized AI Model: Unlike other AI tools, DeepSeek-r1 is designed specifically for language-related tasks.

Use Cases:

  • Chatbots and Virtual Assistants
  • Content Generation
  • Question-Answer Systems
  • Customer Support

Prerequisites

To follow along, you need a Windows machine with Python installed and an internet connection to download the model. The steps below install Ollama with DeepSeek-r1 and get it running with Python.


Basic Setup

  1. Download Ollama: Visit the official Ollama website, download the Windows installer, and run it.
  2. Find DeepSeek-r1: On the Ollama website, open the deepseek-r1 page in the model library to see the available sizes; the model itself is downloaded with the ollama pull command in the next step.

Command Line Setup

Open your command line interface (Command Prompt or PowerShell) and run the following command to pull the DeepSeek-r1 model (append a tag such as deepseek-r1:1.5b to choose a specific size):

ollama pull deepseek-r1


Verify Installation

Run the following command to confirm that the installation was successful:

ollama list


If everything is set up correctly, the list of installed models, including DeepSeek-r1, will be displayed.


Test the Model

To test if DeepSeek-r1 is working as expected, run the following command:

ollama run deepseek-r1


You can interact with the model by typing questions like "How are you?". To exit the session, type /bye or press Ctrl+D.


Python Integration Setup

Now let's set up Python to interact with Ollama.

  1. Create a directory for your project:

    cd \
    mkdir testDeep
    cd testDeep
    
    
  2. Verify your Python version:

    python --version
    
    
  3. Create a virtual environment:

    python -m venv env1
    
    
  4. Activate the virtual environment (this is the Command Prompt script; in PowerShell, run env1\Scripts\Activate.ps1 instead):

    env1\Scripts\activate.bat
    
    
  5. Install the Ollama package:

    pip install ollama
    
    
  6. Optionally, open the project in your preferred editor; for VS Code:

    code .
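As a quick sanity check, you can confirm that Python is actually running inside the virtual environment; this short snippet uses only the standard library:

```python
import sys

# Inside an activated venv, sys.prefix points at the environment
# directory, while sys.base_prefix still points at the base install.
in_venv = sys.prefix != sys.base_prefix
print("Virtual environment active:", in_venv)
```

If it prints False, re-run the activation script from step 4 before installing packages.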
    
    

Launching Python Editors from Command Prompt

To open Python editors directly from the command line:

  • For IDLE: Type idle or python -m idlelib
  • For PyCharm: Type pycharm (if installed in your system's PATH)
  • For Jupyter Notebook: Type jupyter notebook
  • For Spyder: Type spyder

Python Implementation

Here's a simple Python script that holds a multi-turn conversation with DeepSeek-r1. It keeps the full message history so the model retains context between turns:

import ollama

# Start the conversation history with an opening user message
messages = [{'role': 'user', 'content': 'Hello, who are you?'}]

response = ollama.chat(model='deepseek-r1', messages=messages)
print(response['message']['content'])

# Record the assistant's reply so later turns keep their context
messages.append(response['message'])

# Continue the conversation until the user types 'exit'
while True:
    user_input = input("You: ")
    if user_input.lower() == 'exit':
        break

    messages.append({'role': 'user', 'content': user_input})
    response = ollama.chat(model='deepseek-r1', messages=messages)
    print("Assistant:", response['message']['content'])
    messages.append(response['message'])

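One thing to note: deepseek-r1 is a reasoning model, and its replies begin with a chain-of-thought wrapped in <think>...</think> tags. If you only want the final answer, a small helper can remove that block before printing (a sketch; strip_thinking is a hypothetical name, not part of the ollama package):

```python
import re

def strip_thinking(text: str) -> str:
    """Drop the <think>...</think> reasoning block that deepseek-r1
    prepends to its replies, leaving only the final answer."""
    return re.sub(r'<think>.*?</think>', '', text, flags=re.DOTALL).strip()

sample = "<think>\nThe user greeted me.\n</think>\n\nHello! How can I help?"
print(strip_thinking(sample))  # → Hello! How can I help?
```

Call strip_thinking(response['message']['content']) in the script above wherever a response is printed.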

Running the Code in a Virtual Environment

To execute your code in the virtual environment:

  1. Open Visual Studio Code.
  2. Press Ctrl+Shift+P and select Python: Select Interpreter.
  3. Choose the env1 virtual environment to run your code.
  4. Click Run.

You can monitor your GPU performance using Task Manager.
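For longer replies, the ollama package can also stream the response in fragments instead of waiting for the whole message. The helper below is a sketch (print_stream is a hypothetical name, not part of the package) that prints fragments as they arrive and returns the assembled text:

```python
def print_stream(chunks) -> str:
    """Print message fragments as they arrive and return the full text."""
    parts = []
    for chunk in chunks:
        text = chunk['message']['content']
        print(text, end='', flush=True)
        parts.append(text)
    print()
    return ''.join(parts)

# With the Ollama server running and deepseek-r1 pulled, pass stream=True
# to ollama.chat and feed the resulting iterator to the helper:
# import ollama
# reply = print_stream(ollama.chat(
#     model='deepseek-r1',
#     messages=[{'role': 'user', 'content': 'Explain recursion briefly.'}],
#     stream=True,
# ))
```

Streaming makes the assistant feel responsive even when a full answer takes several seconds to generate.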


Conclusion

You've successfully installed Ollama with DeepSeek-r1 on your Windows machine and integrated it with Python. Whether you're working on an AI-powered project or exploring conversational AI, this setup gives you a solid foundation for building intelligent applications.

