DEV Community

Dheeraj Malhotra

Unlocking Local AI Magic: DeepSeek and CodeGPT Empower Developers to Evolve Efficiently

With the rapid development of artificial intelligence technology, more and more developers are looking to integrate AI into their workflows. However, cloud-based AI services can pose problems: privacy and data-security risks, high usage costs, reliance on internet connectivity, and limited customization options. Installing and running DeepSeek locally lets developers harness the power of AI to boost development efficiency while safeguarding their privacy.

CodeGPT is an AI tool based on GPT technology, specifically designed for software developers. It can assist with tasks such as code generation, optimization, debugging, documentation creation, and provide precise suggestions based on context. By integrating CodeGPT with DeepSeek, you can achieve efficient AI-assisted development in a local environment without depending on external cloud services.

Below is a step-by-step guide to help you install and run DeepSeek locally and configure CodeGPT to enhance your development workflow:


Step 1: Install Ollama and CodeGPT in VSCode

To run DeepSeek locally, we first need to install Ollama, which lets us run large language models (LLMs) on our own machine, and CodeGPT, a VSCode extension that integrates those models to provide coding assistance. Using Ollama directly can be inconvenient: models must be downloaded from the command line, and download speeds can be unstable. For that reason, I decided to use a newer integration tool called ServBay.
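For context, here is roughly what the raw Ollama workflow looks like without a GUI. This is a hedged sketch: it assumes Ollama's default local API on port 11434, and the `/api/pull` endpoint and payload shape follow Ollama's public API documentation.

```python
import json
import urllib.request

# Ollama's local HTTP API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434"

def pull_payload(model: str) -> dict:
    """Build the JSON body for Ollama's /api/pull endpoint,
    the API equivalent of running `ollama pull <model>`."""
    return {"name": model, "stream": False}

payload = pull_payload("deepseek-r1:1.5b")

# Uncomment to actually download the model (requires Ollama running):
# req = urllib.request.Request(
#     f"{OLLAMA_URL}/api/pull",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp).get("status"))
```

ServBay wraps this pull step (and model management generally) behind a graphical interface, which is the main convenience it adds.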

Introduction to ServBay

Ollama is a lightweight platform that makes running local LLMs simple. ServBay is a more user-friendly integration layer on top of Ollama, providing an intuitive graphical interface and one-click installation.

ServBay itself is a comprehensive, graphical local web development environment designed for web, Python, AI, and PHP developers, and it is particularly well suited to macOS. It bundles a suite of commonly used development tools and software, including web servers, databases, programming languages, mail servers, queue services, and more, aiming to give developers a convenient, efficient, and unified environment.

Download ServBay

  • Visit the official website: ServBay


  • Download the installer for macOS. Currently, only macOS is supported.
  • During installation, you can select the option to install Ollama automatically.


  • Once you enter the interface, you can directly install DeepSeek.


Install CodeGPT in Visual Studio Code

  • Open VSCode and navigate to the Extensions Marketplace (Ctrl + Shift + X, or Cmd + Shift + X on macOS).
  • Search for "CodeGPT" and click Install.


After installing Ollama and CodeGPT, you're now ready to download and configure DeepSeek to start coding with AI locally. 🚀

Step 2: Preparing the Models

Now that you have successfully installed ServBay and CodeGPT, it's time to download the models you'll use locally.

  • Chat Model: deepseek-r1:1.5b, optimized for smaller environments and capable of running smoothly on most computers.
  • Autocompletion Model: deepseek-coder:1.3b, which uses Fill-In-The-Middle (FIM) technology. This lets it make intelligent autocompletion suggestions as you write code, predicting the intermediate portions of a function or method, not just the beginning or end.
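To make the FIM idea concrete, here is a minimal sketch of how a Fill-In-The-Middle prompt is assembled. The special token names follow DeepSeek-Coder's published prompt format, but treat them as an assumption and verify them against the model card for the exact release you use.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a Fill-In-The-Middle prompt: the model is asked to
    generate the code that belongs between `prefix` and `suffix`."""
    # Token names assumed from DeepSeek-Coder's documented FIM format.
    return f"<｜fim▁begin｜>{prefix}<｜fim▁hole｜>{suffix}<｜fim▁end｜>"

# The editor sends everything before the cursor as the prefix and
# everything after it as the suffix; the model fills in the hole.
prompt = build_fim_prompt(
    "def add(a, b):\n    ",
    "\n\nprint(add(1, 2))",
)
```

This is why FIM models can complete the middle of a function: they are conditioned on both sides of the cursor, not only on the text before it.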

Download the Chat Model (deepseek-r1:1.5b)

Follow the steps in the interface and simply click to download.


To Start Using the Chat Model:

  1. Open CodeGPT in VSCode.
  2. Navigate to the Local LLMs section in the sidebar.


  3. From the available options, select Ollama as the local LLM provider.
  4. Select the model deepseek-r1:1.5b. You can now query the model about your code: highlight any code in the editor, add extra files to the query using the # symbol, and use powerful command shortcuts such as:


  • /fix: fix errors in the code or suggest improvements.
  • /refactor: clean up and improve the structure of the code.
  • /explain: get a detailed explanation of any code snippet.

This chat model is perfect for assisting with specific problems or getting suggestions about your code.
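The same local chat model that powers these shortcuts can also be queried directly over Ollama's HTTP API, which is handy for scripting. A minimal sketch, assuming Ollama's default port 11434 and the `/api/chat` endpoint from its public API documentation:

```python
import json
import urllib.request

def chat_payload(model: str, question: str) -> dict:
    """JSON body for a single-turn, non-streaming chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }

payload = chat_payload(
    "deepseek-r1:1.5b",
    "Explain what this does: def f(x): return x * 2",
)

# Uncomment to send the request (requires Ollama running locally):
# req = urllib.request.Request(
#     "http://localhost:11434/api/chat",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["message"]["content"])
```

Because everything goes through localhost, scripted queries stay as private as the ones you make from the CodeGPT sidebar.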

Download the Autocompletion Model (deepseek-coder:latest)


From the list of available models, select deepseek-coder:latest.

Once selected, you can start coding. As you type, the model will begin providing real-time code suggestions, helping you effortlessly complete functions, methods, or even entire code blocks.

Step 3: Enjoy Seamless, Local, and Private AI-Driven Coding

After setting up the models, you can now fully enjoy the benefits of these powerful tools without relying on external APIs. By running everything locally on your computer, you ensure complete privacy and control over your coding environment. Rest assured: with no data leaving your computer, everything remains secure and private. 👍


If you enjoyed this article 👏👏👏, please give it a clap.

Top comments (4)

Dheeraj Malhotra

Hope this article can help you!

Anika Roy

Amazing software, works perfectly!

Aditya Gupta

This implementation is very interesting. I have a small question: Can we use the fine-tuned model for question answering?

Anika Roy

It's not possible right now, because to fine-tune the model we would have to run it with Python, right?