
Clinton Ekekenta for FastApply

Posted on • Originally published at blog.fastapply.co

How to Add and Use Deepseek-r1 in Your Visual Studio Code (For Free!)

The AI revolution is happening, and Deepseek-r1 is at the forefront. This powerful Large Language Model (LLM) goes head-to-head with top AI models like GPT, excelling in reasoning, coding, and problem-solving, all while running right on your own machine. No more relying on expensive, cloud-based tools. With Deepseek-r1, you get a fast, private, and cost-effective coding assistant that’s always available when you need it.
After countless hours with tools like Cursor and other paid AI helpers, I decided to give Deepseek-r1 a shot. I discovered a game-changer: a seamless, free integration with Visual Studio Code that supercharged my workflow. Ready to dive in? Let me show you how to set it up step by step.

Why Deepseek-r1?

Before we jump into the setup, let's look at why you should consider using Deepseek-r1 as a developer:

  • You can run everything locally on your computer without a cloud provider.
  • It helps you solve complex coding tasks faster and smarter.
  • It performs well in code generation and debugging.

Installing Deepseek-r1 in VS Code

Let's proceed to install Deepseek-r1 in your Visual Studio Code coding environment. To do that, follow the steps below:

Step 1: Install Ollama

To get started, you’ll need Ollama, a lightweight platform that lets you run LLMs locally. Ollama is the backbone of your Deepseek-r1 setup because it will enable you to manage and run Deepseek-r1 effortlessly on your computer.

To install Ollama, head over to Ollama’s official website and download the version for your OS. Then follow their installation instructions to get it up and running.
Downloading Ollama online

Step 2: Download Deepseek-r1

With Ollama installed, it’s time to bring Deepseek-r1 into your coding environment. Open your terminal and run the command below:

```shell
ollama pull deepseek-r1
```

This command downloads the Deepseek-r1 model to your local computer. The download can take a while depending on your connection speed, so be patient. If your hardware is limited, Ollama also hosts smaller distilled variants of the model that you can pull instead.

Installing Deepseek-r1 locally on your computer

Once the download is complete, test it with a simple query to make sure everything is working. Run Deepseek-r1 with the command:

```shell
ollama run deepseek-r1
```

And add your test prompt:


If you see a response, you’re all set! Deepseek-r1 is ready to roll.
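Besides the interactive `ollama run` session, Ollama also exposes a local REST API on port 11434 by default, which is handy if you want to script queries against the model. Here's a minimal sketch using only the Python standard library (it assumes Ollama is running and the `deepseek-r1` model has been pulled):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot completions
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "deepseek-r1") -> dict:
    """Build the JSON body that Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str) -> str:
    """Send a prompt to the locally running model and return its reply."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (only works while Ollama is running locally):
# print(ask("Write a one-line hello world in Python."))
```

Setting `"stream": False` tells Ollama to return the whole response in a single JSON object instead of streaming tokens line by line, which keeps the client code simple.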

Step 3: Install the Continue.dev Extension

Now, let’s bring Deepseek-r1 into Visual Studio Code. For this, we’ll use Continue.dev, a fantastic extension that connects VS Code to LLMs like Deepseek-r1. This extension will act as a bridge between your VS Code and Ollama, allowing you to interact with Deepseek-r1 directly within your coding environment. To install Continue.dev Extension, follow the steps:

  1. Open VS Code and go to the Extensions Marketplace.
  2. Search for Continue.dev and hit install.

Installing the Continue.dev extension in VS Code

Step 4: Configure Deepseek-r1 in Continue.dev

With Continue.dev installed, it’s time to connect it to Deepseek-r1. Follow the steps to configure it:

  • Open the Continue.dev interface by clicking its icon in the VS Code Activity Bar.
  • Look for the model selection button at the bottom-left corner of the chat window.
  • Click the button, select Ollama as the platform, and then choose Deepseek-r1 from the list of available models.

Configuring Deepseek on VS Code
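If the model doesn't appear in the picker, you can also register it by editing Continue's config file directly (at the time of writing, Continue stores its JSON config at `~/.continue/config.json`). A minimal sketch of an Ollama model entry might look like this (the `title` is arbitrary):

```json
{
  "models": [
    {
      "title": "DeepSeek R1 (local)",
      "provider": "ollama",
      "model": "deepseek-r1"
    }
  ]
}
```

Make sure the `model` field matches the exact name of the model you pulled with Ollama, tag included if you pulled a specific variant.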

That’s it! You’re now ready to harness the power of Deepseek-r1 in your coding workflow.

What Can You Do with Deepseek-r1?

Once everything is set up, here's what Deepseek-r1 can do in your coding environment:

  • It gives you intelligent, context-aware suggestions as you type.

  • You can highlight a code block and ask Deepseek-r1 to optimize or rewrite it.

  • If you are stuck on an error, Deepseek-r1 will help you troubleshoot it.

  • You can select any snippet and get a detailed breakdown of how it works.

Here’s a quick demo of how Deepseek-r1 works in your Visual Studio Code.

The best part is that:

  • You don't need subscriptions or hidden fees, just free and powerful AI assistance.
  • Everything runs locally, so your code stays on your machine.
  • You can tailor Deepseek-r1’s behavior to fit your specific needs.
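On the customization point: one way to tailor the model's behavior is an Ollama Modelfile, which lets you derive a new local model with your own system prompt and sampling parameters. A minimal sketch (the derived model name `deepseek-r1-concise` is hypothetical, as are the prompt and temperature):

```
# Modelfile — derive a custom variant of deepseek-r1
FROM deepseek-r1
PARAMETER temperature 0.2
SYSTEM "You are a concise coding assistant. Prefer short answers with working code."
```

Build it with `ollama create deepseek-r1-concise -f Modelfile`, and it will show up alongside your other Ollama models in Continue.dev's model picker.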

Final Thoughts

Integrating Deepseek-r1 into Visual Studio Code has been a game-changer for my productivity. It’s fast, reliable, and incredibly versatile, all without me having to spend a dime. Whether you’re a seasoned developer or just starting, this setup is worth exploring.

So, what are you waiting for? Give Deepseek-r1 a try and experience the future of coding today.

Happy coding! 🚀

Top comments (1)

Артем Гасин

In the config you have to manually update the model version from the default "deepseek-7b" to "deepseek-r1"; then it will work.