Avnish

How to Install DeepSeek-R1 32B on Windows: System Requirements, Docker, Ollama, and WebUI Setup

System Requirements

Component      | Minimum Requirement                        | Recommended Requirement
GPU            | NVIDIA RTX 3090 (24GB VRAM)                | NVIDIA RTX 4090 / A100 (40GB+ VRAM)
CPU            | 8-core processor (Intel i7 / AMD Ryzen 7)  | 16-core processor (Intel i9 / AMD Ryzen 9)
RAM            | 32GB                                       | 64GB+
Storage        | 100GB SSD                                  | 1TB NVMe SSD
OS             | Windows 10/11                              | Windows 11
Docker Support | WSL2 enabled                               | WSL2 enabled
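
Not sure whether your GPU clears the bar? nvidia-smi, which ships with the NVIDIA driver, reports your GPU model and total VRAM:

# Shows GPU model, driver version, and VRAM usage
nvidia-smi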

Installation Methods

I’ll provide three different installation methods:

  1. Using Docker (For easy containerized deployment)
  2. Using Ollama (For a simplified local installation)
  3. Using WebUI (For an interactive browser-based experience)

1️⃣ Installing DeepSeek-R1 32B Using Docker

Step 1: Install Docker

  1. Download Docker Desktop for Windows.
  2. Run the installer and enable WSL2 Backend during setup.
  3. Restart your system and verify installation with:
   docker --version
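
Since the model needs GPU access inside the container, it is worth confirming that WSL2 and GPU passthrough work before downloading anything large. A minimal sanity check (the CUDA image tag below is just an example; any recent nvidia/cuda base tag will do):

# Confirm WSL2 is installed and set as the default backend
wsl --status

# Run nvidia-smi in a throwaway CUDA container; it should list your GPU
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi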

Step 2: Pull the Ollama Docker Image

DeepSeek does not publish an official deepseek/deepseek-r1 image on Docker Hub, so the usual containerized route is Ollama's official image, which can then download the model for you:

docker pull ollama/ollama

Step 3: Run the Docker Container

Start the Ollama container with GPU access, then pull DeepSeek-R1 32B inside it:

docker run -d --gpus all -p 11434:11434 -v ollama:/root/.ollama --name ollama ollama/ollama
docker exec -it ollama ollama pull deepseek-r1:32b

  • -d → Runs in detached mode
  • --gpus all → Gives the container access to all available GPUs
  • -p 11434:11434 → Maps Ollama's API port to the local machine
  • -v ollama:/root/.ollama → Persists downloaded models in a named volume
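
To confirm the container is healthy and the download finished, check from the host:

# The ollama container should show as "Up"
docker ps

# deepseek-r1:32b should appear in the model list
docker exec -it ollama ollama list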

Step 4: Access the WebUI

The Ollama container exposes only an HTTP API, so for a browser interface add an Open WebUI container pointed at it:

docker run -d -p 8080:8080 -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

  1. Open your browser and go to:
   http://localhost:8080
  2. Start interacting with DeepSeek-R1 32B!
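
If you would rather skip the UI, you can also talk to the model directly through Ollama's REST API. A quick smoke test from PowerShell (the /api/generate endpoint is part of Ollama's documented API; "stream": false returns one complete JSON response):

Invoke-RestMethod -Uri http://localhost:11434/api/generate -Method Post -Body '{"model": "deepseek-r1:32b", "prompt": "Hello", "stream": false}'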

2️⃣ Installing DeepSeek-R1 32B Using Ollama

Ollama is an alternative that simplifies running LLMs locally.

Step 1: Install Ollama

  1. Download Ollama for Windows from ollama.com.
  2. Run the installer and follow the setup instructions.

Step 2: Install DeepSeek-R1 32B Model

Once Ollama is installed, run the following command:

ollama pull deepseek-r1:32b

This downloads the model weights (roughly 20 GB with the default 4-bit quantization) and prepares the model to run locally.
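
Once the pull finishes, you can confirm the model is registered locally:

# deepseek-r1:32b should appear with its size and digest
ollama list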

Step 3: Run DeepSeek-R1 Locally

Start DeepSeek-R1 in interactive mode:

ollama run deepseek-r1:32b

Now, you can chat with DeepSeek directly from the terminal.
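
For scripting rather than an interactive session, ollama run also accepts a one-shot prompt as an argument:

ollama run deepseek-r1:32b "Summarize the difference between a mutex and a semaphore."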


3️⃣ Running DeepSeek-R1 with a WebUI

If you prefer an interactive WebUI, follow these additional steps.

Step 1: Install a WebUI (text-generation-webui)

Clone text-generation-webui and install its dependencies:

git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui
pip install -r requirements.txt
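
It is worth running that pip install inside a virtual environment so the WebUI's heavy dependency set stays out of your global Python (the repo also ships a start_windows.bat that automates environment setup, if you prefer):

# Create and activate a venv first, then run the pip install from above
python -m venv venv
.\venv\Scripts\activate
pip install -r requirements.txt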

Step 2: Start the WebUI with DeepSeek

text-generation-webui loads models from its models/ folder rather than from Ollama's registry, so first download the weights from Hugging Face. The 32B "R1" checkpoint is the distilled DeepSeek-R1-Distill-Qwen-32B:

python download-model.py deepseek-ai/DeepSeek-R1-Distill-Qwen-32B

Then launch the server and point it at the downloaded model:

python server.py --model DeepSeek-R1-Distill-Qwen-32B

Now you can access DeepSeek-R1 via the WebUI at its default Gradio port:

http://localhost:7860
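
One caveat: the full-precision 32B weights will not fit in 24GB of VRAM. text-generation-webui can quantize on load via bitsandbytes; assuming the default transformers loader, the commonly used flag is:

python server.py --model DeepSeek-R1-Distill-Qwen-32B --load-in-4bit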

Which Method Should You Choose?

Method | Best For                         | Ease of Setup
Docker | Running in an isolated container | ⭐⭐⭐
Ollama | Quick setup and local execution  | ⭐⭐⭐⭐⭐
WebUI  | Browser-based interaction        | ⭐⭐⭐⭐

If you have a powerful GPU, Docker or Ollama is a great choice.

If you prefer browser access, go for the WebUI setup.
