Varnit Sharma

Using Deepseek r1 in Visual Studio Code for free.

With the rapid advancement of Large Language Models (LLMs), Deepseek-r1 has emerged as a top contender. Matching the performance of OpenAI's o1 model in reasoning and code generation, Deepseek-r1 leverages transformers and logical tree reasoning to optimize decision-making, setting a new benchmark in AI-powered development tools.

Having used tools like Cursor and other paid code helpers, I was curious to see how Deepseek-r1 would perform in my workflow. This sparked my journey to integrate it into my environment and evaluate its capabilities.

Let’s Get Started 🚀

Step 1: Install Ollama

To get started, you’ll need Ollama, a platform that allows you to run LLMs locally. If you don’t already have it installed, download it from Ollama's official site. Follow the setup instructions to get it running smoothly on your machine.

Step 2: Download Deepseek-r1

Once Ollama is up and running, it’s time to pull the Deepseek-r1 model to your local environment. Run the following command in your terminal:
$ ollama pull deepseek-r1

After downloading, you can test if the model is working correctly by running a simple curl command:
$ curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:latest",
  "prompt": "Why is the sky blue?"
}'

If the model is successfully set up, you’ll see a chunked output in your terminal—indicating that Deepseek-r1 is ready for action.
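The same endpoint can be scripted from Python if you want to use the model outside the terminal. Here is a minimal sketch assuming the default Ollama port (11434); the helper names `collect_response` and `generate` are my own, not part of any library:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def collect_response(ndjson_text: str) -> str:
    """Join the 'response' fields of Ollama's newline-delimited JSON chunks."""
    parts = []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk signals completion
            break
    return "".join(parts)

def generate(prompt: str, model: str = "deepseek-r1:latest") -> str:
    """POST a prompt to the local Ollama server and return the full reply."""
    payload = json.dumps({"model": model, "prompt": prompt}).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return collect_response(resp.read().decode())
```

The chunked output you see in the terminal is newline-delimited JSON; `collect_response` stitches the `response` fields together and stops at the chunk marked `done`.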

Step 3: Install the Continue.dev Extension

With Visual Studio Code as a prerequisite, head to the Extensions Marketplace and install the Continue.dev extension. This extension bridges the gap between VS Code and advanced AI models, enabling seamless integration for coding assistance.

Once installed, you're one step closer to unleashing the power of Deepseek-r1!

Step 4: Configure Deepseek-r1 in Continue.dev

Open Continue.dev by clicking on its icon in the Activity Bar of VS Code.

In the chat window, locate the model selection button at the bottom-left corner.

Button Location

Click on the button to open a dropdown menu.

Model Selection

From the menu, select Ollama as the platform.

Once Ollama is selected, you’ll see all available models in your local environment. Select Deepseek-r1 from the list.

At this point, Continue.dev is configured to use Deepseek-r1 via Ollama.
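If you prefer a file over the dropdown, older Continue releases read a model list from `~/.continue/config.json` (newer releases have moved to `config.yaml`, and field names can vary by version, so treat this as a sketch rather than the exact schema):

```json
{
  "models": [
    {
      "title": "DeepSeek R1 (local)",
      "provider": "ollama",
      "model": "deepseek-r1:latest"
    }
  ]
}
```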

Ready to Code!

With everything set up, you can now take full advantage of Deepseek-r1's advanced reasoning and code-generation capabilities directly within VS Code. Continue.dev enables additional context-aware features, enhancing the development experience:

Autocomplete: Get precise suggestions for completing lines or blocks of code.
Code Refactoring: Highlight sections of code and ask for optimizations or rewrites.
Code Explanations: Select any code snippet and let the AI break it down for you.
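Under the hood, these features call your local Ollama server. The "explain this code" behavior can be reproduced outside the editor via Ollama's `/api/chat` endpoint; this is a sketch assuming the model is already pulled, and `build_explain_request` and `explain` are illustrative names of my own:

```python
import json
import urllib.request

def build_explain_request(snippet: str, model: str = "deepseek-r1:latest") -> dict:
    """Build an Ollama /api/chat payload asking the model to explain a code snippet."""
    return {
        "model": model,
        "stream": False,  # request a single JSON reply instead of chunks
        "messages": [
            {"role": "user", "content": f"Explain what this code does:\n\n{snippet}"},
        ],
    }

def explain(snippet: str) -> str:
    """Send the request to a locally running Ollama server and return the reply text."""
    payload = json.dumps(build_explain_request(snippet)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```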
Here’s a quick look at the workflow:

Final Execution

Why Deepseek-r1?

Reasoning Excellence: Logical tree reasoning ensures smarter, more optimized decisions.
Transformer Power: Delivers robust performance in code generation tasks.
Local Execution: Run models locally for better privacy and faster responses.
By integrating Deepseek-r1 with Continue.dev, you unlock a powerful development workflow tailored to your specific needs.

Cheers to smarter coding! 🥂

Top comments (59)

Yair Even Or

DeepSeek cut-off date is end of 2023, which is ancient compared to Gemini 2.0 (August 2024).

It is imperative to have a recent cut-off date for front-end related work or when querying anything about recent things.

This is why I will NOT use it.

R Steadman

On the other hand, Gemini is rubbish, so there's that.

Amjad Abujamous

Agreed, based on personal experience.

Yair Even Or

Rubbish?? It's at the very TOP of the leaderboard:

lmarena.ai/?leaderboard

R Steadman

It could be on top of every leaderboard in existence; it's still inferior to, say, Claude for coding tasks (and since this is dev.to, that's the relevant bit).

Yair Even Or • Edited

It's at the top for coding on the world's most famous AI leaderboard site. It has been ranked there by thousands of developers, which completely contradicts your single personal experience.


R Steadman

Whatever you say buddy.

Reid Lai

Gemini is BERT, which is a bidirectional model, not best suited for Q&A usage, unlike decoder-only models, e.g. GPT, Llama, etc.

Yair Even Or • Edited

So how do you explain that thousands of people have rated it so highly (for coding)?
I don't think they are all Google workers trying to boost their own product by rating it.

Tiago Rangel

This will not run Deepseek r1, but rather Deepseek coder. Even if you wanted to run Deepseek r1, you would need a lot of processing power — r1 can't just run on a laptop or PC.

Best Codes

r1 comes with a lot of variants:

[image: table of deepseek-r1 variants available on Ollama, from 1.5b up to 671b]

The 1.5b variant could very easily run on a weak or older device, and the 8b variant works fine on my device.

Elliot Brenya sarfo

I am running deepseek-r1 on my old intel macbook

Red Ochsenbein (he/him)

Which variant? I doubt the full 671b variant works on a MacBook. And everything under 32b is really only useful with fine-tuning for specialized tasks.

Nguyễn Anh Nguyên

Yes

Pravin Jadhav • Edited

For Windows, follow these steps:

**Setting Up Deepseek-r1 (Using Windows CMD)**

  1. Install Ollama
    • Download Ollama from the official website.
    • Run the downloaded file and complete the setup.
  2. Download the Deepseek-r1 Model
    • Open CMD and run:

      ollama pull deepseek-r1

    • To test if the model is working, use this command:

      curl http://localhost:11434/api/generate -d "{\"model\": \"deepseek-r1:latest\", \"prompt\": \"Why is the sky blue?\"}"

    • If you see output in the terminal, Deepseek-r1 is ready to go!

**Setting Up the Continue.dev Extension**

  1. Install Visual Studio Code (VS Code).
  2. Install the Continue.dev Extension
    • Open VS Code and go to the Extensions Marketplace.
    • Search for "Continue.dev" and install it.
  3. Connect Deepseek-r1 to Continue.dev
    • Open the Continue.dev extension.
    • Click the model selection button at the bottom-left corner.
    • Select "Ollama" and choose the "Deepseek-r1" model.

**Using Deepseek-r1**

Now you can use Deepseek-r1’s features directly in VS Code:

  • Autocomplete: Get smart suggestions while writing code.
  • Code Refactoring: Ask the AI to optimize or rewrite your code.
  • Code Explanations: Understand what your code does with AI help.

**Why Choose Deepseek-r1?**

  • Logical Reasoning: Makes smarter decisions using logical tree reasoning.
  • Transformer Technology: Excels in code generation tasks.
  • Local Execution: Runs on your machine for better privacy and faster responses.



Start using Deepseek-r1 in your workflow and enjoy smarter, faster coding! 🥂
(hashnode.com/@askpravinjadhav)

leob

Most obvious question: why would you use this instead of Copilot? Or am I comparing apples and oranges ;-)

Red Ochsenbein (he/him)

Maybe to not send data to a company you can or cannot trust

leob

"A company you cannot trust" - DeepSeek is Chinese, does your "data" go straight to the Communist Party? ;-)

Red Ochsenbein (he/him)

Not if you run their models locally...

Yair Even Or

There was a better guide a few days ago already:

dev.to/shayy/run-deepseek-locally-...

keyru Nasir Usman • Edited

What I don't understand is how a small startup was able to build an LLM that beats ChatGPT. Even tech giants like Google or Elon Musk didn't build an LLM that can beat ChatGPT. All of a sudden a small Chinese company comes up with a model superior to OpenAI's ChatGPT. Did they use an LLM recipe that OpenAI doesn't know? Guys, if there is something I missed here, please enlighten me 😊

Red Ochsenbein (he/him)

This is what you get when enthusiastic, smart people try a different angle instead of just running for investors' money. Yes, DeepSeek R1 is trained differently: they used a rule/heuristics-based reward system and automated benchmarking during the training process... nothing new, but apparently nobody else thought of combining this with LLM training.

Vinayak Mishra

Hey Varnit, had a question for you after seeing this. How good is Deepseek in terms of hallucinations? Last night I was reading about LLM hallucination detection.

Reid Burton

Don't you have to have a PC with decent memory & processing capabilities to do this?

Varnit Sharma

Not really. There are variants trained at different parameter counts; below 16B params, most domestic-use machines can handle them.

Reid Burton

Huh... I did not know that.
