Running powerful AI models locally on your PC might sound daunting, but it's actually quite straightforward. In this tutorial, I'll walk you through installing DeepSeek-R1 on your Windows machine in about ten minutes. The best part? You won't need expensive hardware or a high-end GPU that costs more than your mortgage.
Hardware Requirements
Before we dive in, let's talk about what you'll need. I'm using a Minisforum MS-A1 with an AMD Ryzen 7 CPU and 32GB of RAM for this tutorial. While these specs are comfortable, you can run DeepSeek-R1 on more modest hardware. The key is finding the right balance between model size and your system's capabilities, which we'll discuss later.
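If you're not sure what your own machine has, you can check from Command Prompt with built-in Windows commands (nothing here is specific to DeepSeek or Ollama):

```
REM Show processor and total installed RAM
systeminfo | findstr /C:"Processor" /C:"Total Physical Memory"
```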
Setting Up Your Windows Environment
The first step is preparing your Windows system for running AI models locally. Let's start by enabling the necessary Windows features:
- Press the Windows key and search for Windows features
- In the Windows Features dialog, locate and check the box next to Virtual Machine Platform
- Click OK and restart your computer when prompted
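If you'd rather do this from the command line than the Windows Features dialog, the equivalent is a single DISM command run from an elevated (Administrator) Command Prompt; you'll still need to restart afterwards:

```
REM Enable the Virtual Machine Platform feature (run as Administrator)
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
REM Then restart Windows so the feature takes effect
```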
Installing Required Software
Ollama Installation
Ollama serves as our model manager and is essential for running DeepSeek-R1. Here's how to set it up:
- Visit ollama.com and click the download button
- Select the Windows version (Mac and Linux versions are also available)
- Run the installer once downloaded
Note: Don't be concerned if you don't see a graphical interface after installation; this is normal, as Ollama runs in the background.
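You can confirm it's alive from Command Prompt; both of these are standard Ollama CLI commands:

```
REM Print the installed Ollama version
ollama --version
REM List locally downloaded models (empty at this point)
ollama list
```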
Docker Desktop Setup
Docker Desktop is crucial for running our web interface. Follow these steps:
- Go to docs.docker.com/get-started
- Click Get Docker and select Docker Desktop for Windows
- Download the x86_64 version
- Run the installer and restart your computer
When you log back in, you might see two important prompts:
- A command prompt asking you to update WSL (Windows Subsystem for Linux)
- The Docker Desktop setup wizard
Complete both processes to ensure proper configuration.
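If either prompt doesn't appear, or you just want to sanity-check the install, these commands cover the same ground (standard WSL and Docker CLI calls):

```
REM Update the Windows Subsystem for Linux manually
wsl --update
REM Confirm the Docker engine is installed and able to run containers
docker --version
docker run hello-world
```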
Installing Open WebUI
Open WebUI provides our graphical interface for interacting with DeepSeek-R1. Here's how to set it up:
- Visit docs.openwebui.com
- Navigate to Getting Started → Quick Start
- Open Command Prompt (Windows key + R, type cmd, press Enter)
- Copy and paste the provided container command (a sample of what it looks like is shown after this list)
- For systems with NVIDIA GPUs, use the GPU-enabled command
- For other systems, use the standard command
- Wait for the container to download and start
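For reference, at the time of writing the Quick Start commands look roughly like the ones below; copy the current versions from docs.openwebui.com rather than from here, since image tags and flags can change.

```
REM Standard command (no NVIDIA GPU), connecting to Ollama on the same machine
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

REM NVIDIA GPU variant: pass the GPUs through and use the CUDA image
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
```

With the port mapping above, the UI also ends up directly at http://localhost:3000 in your browser.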
Once running, you can access the interface through Docker Desktop by clicking on the ports tab, which will open the UI in your browser. You'll notice it has a familiar chat interface, similar to ChatGPT.
Adding the DeepSeek-R1 Model
Now for the exciting part - installing the actual AI model:
- Go back to ollama.com and click on Models
- Find DeepSeek in the list
- Choose your model size:
- Model names include parameter counts (e.g., deepseek-r1:7b for 7 billion parameters)
- Larger models offer better performance but require more resources
- Start with deepseek-r1:7b for a good balance
- Copy the model name (e.g., deepseek-r1:7b)
- In the Open WebUI interface, click the model selection dropdown
- Paste the model name and initiate the download
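Alternatively, you can pull the model from the command line first; Open WebUI picks up anything Ollama already has locally. These are standard Ollama commands:

```
REM Download the 7B variant (several GB, so this can take a while)
ollama pull deepseek-r1:7b
REM Optional: chat with it straight from the terminal
ollama run deepseek-r1:7b
REM Confirm it now appears in the local model list
ollama list
```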
Using Your Local DeepSeek-R1
Once the model has downloaded and been verified, you can start using it completely offline. All processing happens on your local machine, ensuring your data stays private. You can even download multiple models and run them side by side to compare their outputs.
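Because the model is served locally by Ollama, you can also script against it through Ollama's built-in HTTP API, which listens on port 11434 by default. Here's a minimal sketch using curl (bundled with current Windows versions); setting "stream" to false returns the whole answer as a single JSON response:

```
REM Send a one-off prompt to the local Ollama API and get the full response back as JSON
curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:7b\", \"prompt\": \"Explain in one sentence what a local LLM is.\", \"stream\": false}"
```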
Remember, this is just the beginning. You can experiment with different model sizes and configurations to find what works best for your specific hardware and needs.