How to Install and Run DeepSeek Locally with Ollama
DeepSeek is a powerful open-source language model, and with Ollama, running it locally becomes effortless. In this guide, I'll walk you through installing Ollama and running DeepSeek-r1:1.5b on your command line.
Step 1: Install Ollama
Ollama provides a simple way to run and manage AI models locally. You can install it using the following commands:
For macOS (via Homebrew)
brew install ollama
For Linux
curl -fsSL https://ollama.ai/install.sh | sh
For Windows
Download and run the installer from ollama.com/download/windows, then follow the setup prompts.
Once installed, restart your terminal or command prompt.
Step 2: Pull DeepSeek Model
Now that Ollama is installed, you need to download the DeepSeek-r1:1.5b model. Run:
ollama pull deepseek-r1:1.5b
This fetches the model from Ollama’s registry. The first pull downloads the full model weights, so it may take a while depending on your internet speed.
Step 3: Run DeepSeek in Terminal
Once the model is downloaded, you can start an interactive session with it:
ollama run deepseek-r1:1.5b
This will launch the DeepSeek model, allowing you to input prompts and receive AI-generated responses directly from your terminal.
Step 4: Using DeepSeek in a Script
If you want to integrate DeepSeek into a script, you can use Ollama's official Python library (install it with pip install ollama; the Ollama server must be running):

import ollama

# Send a single-turn chat request to the locally running model
response = ollama.chat(
    model="deepseek-r1:1.5b",
    messages=[{"role": "user", "content": "Hello, how can you assist me?"}],
)

# The reply text lives under the "message" key of the response
print(response["message"]["content"])
Conclusion
Running DeepSeek locally is now easier than ever with Ollama. Whether you're using it for personal projects, AI research, or development, this guide should help you get started. Let me know in the comments if you're still using ChatGPT or switching over to DeepSeek!
Happy coding! 🚀
Let's connect on LinkedIn and check out my GitHub repos!