With DeepSeek being the rave of the moment in the world of AI, it is no wonder that every developer wants to explore the power and capabilities of its models. A limiting factor, however, is that powerful large language models like DeepSeek's require heavy compute (GPUs) to run locally.
Thankfully, cloud platforms like Kaggle offer developers free GPUs, with a limited weekly runtime, so that they can run or train compute-intensive open-source models.
Now let's walk through the simple steps to run the DeepSeek-R1 reasoning model in a Kaggle notebook with the aid of Ollama.
STEP 1: Create a new Kaggle notebook
STEP 2: Go to Settings > Accelerator and select GPU T4 x2. This machine is powerful enough for our purposes.
STEP 3: Install Ollama in your notebook
!curl -fsSL https://ollama.com/install.sh | sh
This downloads and executes the installation script for Ollama.
STEP 4: Install Ollama Python SDK
!pip install ollama
STEP 5: Start Ollama server as a background process
import subprocess

# Launch the Ollama server in the background so the notebook stays usable
process = subprocess.Popen("ollama serve", shell=True)
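Because the server starts asynchronously, the next step can fail if it runs before the server is ready. A small sketch of an optional readiness check: Ollama listens on `localhost:11434` by default, and this helper (the function name, URL, and timeout are my own choices, not part of the original steps) simply polls that address until it responds.

```python
import time
import urllib.request
import urllib.error

def wait_for_ollama(url='http://127.0.0.1:11434', timeout=30):
    """Return True once the Ollama server responds, False if the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # Any HTTP response (even an error page) means the server is up
            with urllib.request.urlopen(url, timeout=2):
                return True
        except (urllib.error.URLError, ConnectionError):
            time.sleep(1)  # server not ready yet; retry shortly
    return False
```

Call `wait_for_ollama()` in a cell before pulling the model; it returns `True` as soon as the server is reachable.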
STEP 6: Pull DeepSeek R-1 model from Ollama model hub
!ollama pull deepseek-r1:1.5b
This will download the 1.5-billion-parameter DeepSeek-R1 model. Once it finishes, you can confirm the model is available with `!ollama list`.
STEP 7: Chat with the model
import ollama

result = ollama.chat(
    model='deepseek-r1:1.5b',
    messages=[{
        'role': 'user',
        'content': 'What are the steps involved in baking a chocolate cake?'
    }],
)

print(result.message.content)
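Since `ollama.chat` takes the full message history on every call, holding a multi-turn conversation is just a matter of appending to a list. Here is a minimal sketch, assuming the server from Step 5 is still running; the helper names `add_turn` and `ask` are my own, not part of the SDK.

```python
def add_turn(history, role, content):
    """Append a message dict in the shape ollama.chat expects."""
    history.append({'role': role, 'content': content})
    return history

def ask(history, question, model='deepseek-r1:1.5b'):
    """Send the question plus all prior turns, record the reply, return its text."""
    import ollama  # SDK installed in Step 4
    add_turn(history, 'user', question)
    reply = ollama.chat(model=model, messages=history)
    # Record the assistant's answer so the next question keeps its context
    add_turn(history, 'assistant', reply.message.content)
    return reply.message.content
```

Usage in a notebook cell:

```python
history = []
print(ask(history, 'What are the steps involved in baking a chocolate cake?'))
print(ask(history, 'Summarize those steps in one sentence.'))
```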
By following the above steps, you can easily run open-source models, including DeepSeek models, in a Kaggle notebook using Ollama.
Here is an existing notebook that implements all the above-mentioned steps: https://github.com/Ifeanyi55/OllamaRun/blob/main/ollamarun.ipynb