
Muhammad Jahangir


Run DeepSeek-R1 Locally with Ollama and AnythingLLM

Artificial Intelligence is evolving fast, and DeepSeek-R1 is one of the latest models making waves in the AI community. Developed by the Chinese AI company DeepSeek, this model is being compared to some of the best from OpenAI. But what makes DeepSeek-R1 so exciting? Well, it’s not just powerful—it’s open-source, meaning you can run it on your own machine without spending a dime! In this guide, I’ll show you how to set up DeepSeek-R1 locally using Ollama in just a few minutes.

Why DeepSeek-R1?


DeepSeek-R1 isn’t just another AI model; it’s a game changer. It’s affordable, powerful, and designed to tackle complex problems with ease. Whether you’re coding, troubleshooting, or exploring AI-driven solutions, DeepSeek-R1 delivers strong reasoning and problem-solving capabilities. Thanks to its built-in chain-of-thought reasoning, it works through problems step by step, making it a serious competitor to top models like OpenAI’s o1 and Google’s Gemini. Now, let’s jump into how you can get it running on your local machine.

Getting started with Ollama


Before you can run DeepSeek-R1, you need Ollama. This tool lets you run AI models locally, providing an easy way to interact with them. Setting it up takes just a minute.

Here’s how to get started:

Step 1: Install Ollama


First, head over to the Ollama website and download the version that fits your operating system. For this guide, I’ll be using the macOS version on an Intel-based Mac. Once the download is complete, follow the on-screen instructions to finish the installation; it’s quick and easy!

Step 2: Download DeepSeek-R1


When you visit the Ollama website, you’ll notice that you can run several different versions of DeepSeek-R1, each with its own hardware requirements listed on the model page.

You can choose from versions like 1.5b, 7b, 8b, 14b, 32b, 70b, and 671b. Keep in mind that the hardware requirements grow as you select the larger versions. For this guide, I will be using the 1.5b version.

After installing Ollama, open your terminal and use the following command to download and start the DeepSeek-R1 model. The :1.5b tag selects the version; omitting it pulls Ollama’s default size instead:

ollama run deepseek-r1:1.5b
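If you’d rather download the model now and chat with it later, ollama pull fetches it without opening an interactive session:

ollama pull deepseek-r1:1.5b

Either command stores the model locally, so subsequent runs start immediately.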

Step 3: Verify Installation

After downloading, verify the installation by running:

ollama list

You should see deepseek-r1:1.5b in the list of available models. If you do, great job! You're ready to run the model.
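You can also inspect the model’s details, such as its architecture, parameter count, and context length, with ollama show:

ollama show deepseek-r1:1.5b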

Step 4: Run DeepSeek-R1

Now, let's start the model using the command:

ollama run deepseek-r1:1.5b

And just like that, you're interacting with DeepSeek-R1 locally. It's that simple!
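Ollama also serves a local HTTP API on port 11434 while it’s running, so you can query the model from scripts as well as from the interactive prompt. Here’s a minimal sketch using the /api/generate endpoint, assuming the default host and port (the prompt is just an example):

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Explain chain-of-thought reasoning in one sentence.",
  "stream": false
}'

Setting "stream": false returns the full response as a single JSON object instead of a token-by-token stream.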

Step 5: Download AnythingLLM


AnythingLLM is a desktop application that provides a clean chat interface on top of local and remote language models. In this guide, I’ll be using AnythingLLM as my tool of choice to interact with the DeepSeek-R1 model. Once installed, select Ollama as your LLM provider in the settings, pick deepseek-r1 as your preferred model for testing, and get started!
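AnythingLLM talks to the model through the same local Ollama server, so before configuring it, it’s worth confirming that the server is reachable and the model is visible. A quick check, assuming Ollama’s default port:

curl http://localhost:11434/api/tags

The response is a JSON list of your locally installed models; deepseek-r1:1.5b should appear in it.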

Step 6: Enter a Query

How many planets are in the solar system? Please list them.

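If you’d later like to script this kind of query instead of typing it into the UI, the same question can go straight to Ollama’s /api/chat endpoint; a sketch, again assuming the default host and port:

curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:1.5b",
  "messages": [
    { "role": "user", "content": "How many planets are in the solar system? Please list them." }
  ],
  "stream": false
}'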

Conclusion

Running LLMs locally with AnythingLLM is a powerful approach for developers who prioritize privacy, performance, and cost efficiency. By setting up DeepSeek-R1 on your local machine, you can experiment with AI models without relying on external APIs. Whether you're a researcher, developer, or enthusiast, this setup provides a flexible and efficient way to interact with LLMs.

Stay tuned for more insights into deploying and optimizing LLMs for local development!
