DEV Community

Vaibhav sisodiya

Don't use DeepSeek R1 before this!

Unless you've just woken up from a coma or sworn off technology entirely (which, let's be honest, would be impressive if you're reading this), you've probably heard about the AI model that's shaking up the industry: R1 by DeepSeek.

R1 has left American AI labs scrambling, matching or beating OpenAI's flagship reasoning model, o1, on several benchmarks, while o1 sits behind a hefty $200/month ChatGPT Pro subscription. The real kicker? R1's weights are free and open-source, released under an MIT license.

But before you jump on the hype train, let’s take a moment to talk about the potential risks of using R1.


🚨 Why You Should Avoid Deepseek.com

Yes, you read that right. Do not use Deepseek.com.

While this is the official website for accessing R1, the catch is that DeepSeek is a Chinese company, and its own privacy policy states that the data it collects is stored on servers in China. If you've followed China's tech policies at all, you know privacy isn't exactly their strong suit.

Open the network tab while browsing Deepseek.com and it's evident that the site meticulously tracks every action you perform. Here's a screenshot showing just how aggressively they monitor user behavior:

Tracking Screenshot

If you value data privacy and security, using their website might not be the best idea.


✅ The Better Alternative: Self-Hosting R1

So, what’s the best way to use R1 without compromising your privacy?

There are two safer alternatives:

  1. Use Their API – This is a viable option, but it involves recurring costs, and your prompts still leave your machine for DeepSeek's servers.
  2. Host the Model Yourself – This is the best solution if you want full control and privacy.

To run R1 on your local machine (or a cloud server), you can use Ollama, a tool that lets you download and run a wide range of open models with a single command.

You can find all DeepSeek models on Ollama, including R1. Here's the link to the R1 model:
👉 https://ollama.com/library/deepseek-r1
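If you already have Ollama installed, getting R1 running locally comes down to two commands. A minimal sketch (the `8b` tag here is just an example; pick whichever size fits your hardware):

```shell
# Download the weights for the 8B distilled variant (a few GB, one-time).
ollama pull deepseek-r1:8b

# Start an interactive chat with the model right in your terminal.
ollama run deepseek-r1:8b
```

`ollama run` will also pull the model automatically if it isn't downloaded yet, so the second command alone is enough.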

📌 Choosing the Right Model Size

R1 comes in multiple variants. The smaller ones, from 1.5B to 70B parameters, are distilled versions built on Qwen and Llama bases, and there's also the full 671B model for those with serious hardware. Here's a breakdown:

[Image: table of R1 model sizes available on Ollama]

If you're unsure where to start, the 8B model is a great middle ground between performance and resource consumption.
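Each size is just a different tag on the same model name, so switching variants is only a matter of changing the tag. For example, assuming your machine has enough RAM/VRAM for the tag you pick:

```shell
# Smaller tags run on modest hardware; larger ones need serious RAM/VRAM.
ollama pull deepseek-r1:1.5b   # lightest: fine for laptops and quick tests
ollama pull deepseek-r1:8b     # solid balance of quality and speed
ollama pull deepseek-r1:70b    # strongest distill, needs beefy hardware
```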

🖥️ GUI Options for the Non-Terminal Folks

If you're not comfortable with command-line tools and prefer a more visual approach, there are GUI wrappers for Ollama like Open WebUI. These make running AI models as easy as clicking a button.
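If you go the Open WebUI route, its quick start is a single Docker command (shown here as in the project's docs at the time of writing, assuming Ollama is already running on the same machine):

```shell
# Run Open WebUI in Docker; it detects the local Ollama instance and
# serves a ChatGPT-style interface at http://localhost:3000.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main
```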

💻 Integrating R1 into Your Own Application

For developers, Ollama also offers an npm package that makes it easy to call a locally running R1 from Node.js applications:
👉 https://www.npmjs.com/package/ollama
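The npm package is a wrapper around Ollama's local HTTP API (port 11434 by default), so you can also prototype against that API directly before wiring it into your app. A minimal sketch, assuming the Ollama server is running and the 8B model has been pulled (the prompt is just a placeholder):

```shell
# Ask the locally hosted R1 a question over Ollama's REST API.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:8b",
  "prompt": "Explain recursion in one sentence.",
  "stream": false
}'
```

Because everything stays on localhost, none of your prompts ever leave your machine.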


Final Thoughts 💭

R1 is an impressive leap in AI technology, but using it through Deepseek.com might not be the best move if you care about your data privacy. Self-hosting the model via Ollama gives you both privacy and flexibility.

What are your thoughts on R1? Let’s discuss in the comments!
