Make AI Models Your Perfect Roommate! (ServBay+Ollama+ChatBox)

At the very beginning:

Recently, DeepSeek has become so popular that everyone wants to give it a try; its arrival has sparked a nationwide AI craze.
Initially, I thought deploying DeepSeek locally would be very challenging, but after testing, it turned out to be quite simple. The process is as easy as installing new software on your computer, and it takes only about ten minutes to complete the local deployment.
Today, let's talk about how to deploy the DeepSeek-R1-1.5B model locally on your own computer.
Installation Steps Overview:

  1. Install ServBay: Why? Because it lets you install Ollama and DeepSeek with a single click each.
  2. Install a GUI that supports Ollama: Find the command line unfriendly? Don't worry, we've got the perfect GUI prepared for you.
  3. Start seamless conversations with DeepSeek!

I. Why Deploy DeepSeek Locally?

Some of you may share the same experience as me—frequently encountering the message "Server busy, please try again later" when using DeepSeek. However, with local deployment, this issue simply doesn’t exist.


I recommend everyone deploy the DeepSeek-R1-1.5B model. Why this version? A few reasons:

  1. Compact and Efficient: DeepSeek-R1-1.5B is a lightweight model with only 1.5 billion parameters. Sounds "tiny," right? Don't underestimate it: this is a "small but mighty" model. It needs only about 3GB of VRAM to run (see the quick estimate right after this list), so even computers with modest configurations can handle it with ease. It also performs exceptionally well at mathematical reasoning, even surpassing GPT-4o and Claude 3.5 on some math benchmarks. Of course, if your computer has a higher-end configuration, you can opt for a larger version.
  2. Higher Flexibility: A locally deployed model is not limited by external platforms; you gain complete control over its behavior and output.
  3. No Content Censorship: Models accessed via API calls may be restricted by the provider's content policies (e.g., OpenAI's ChatGPT limits responses on sensitive topics). A locally deployed model can be adjusted to your needs, allowing discussion of a broader range of topics.
  4. Privacy Protection: All data is processed locally and never uploaded to the cloud, making it suitable for scenarios with high data-privacy requirements.
  5. Full Control: You control the model's runtime environment, its data inputs and outputs, and its updates and optimizations.
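
A quick sanity check on that 3GB figure: at fp16 precision, model weights cost about 2 bytes per parameter, so 1.5 billion parameters need roughly 3GB for the weights alone (quantized builds need even less). Here's the arithmetic as a small Python sketch:

```python
# Rough VRAM estimate for a 1.5B-parameter model at different precisions.
params = 1.5e9  # 1.5 billion parameters

for name, bytes_per_param in [("fp16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{name}: ~{gib:.1f} GiB for weights alone")
```

(Real usage runs a bit higher once the KV cache and runtime overhead are counted.)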

In summary, local deployment eliminates those "server busy" failures and makes for a far smoother experience. If your needs differ, the hardware table below can help you pick another version.

II. Hardware Requirements for Different Versions of DeepSeek

Below are the hardware requirements for different versions of the DeepSeek model. You can choose the version that best matches your computer's configuration.

(Screenshot: hardware requirements table for each DeepSeek model version.)

III. Deployment Steps

1. Download ServBay

ServBay requires macOS 12.0 Monterey or later. There is no Windows version yet, but according to the official statement, one is coming soon.
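
Not sure which macOS version you're on? Here's a quick way to check from Python; a minimal sketch using only the standard library:

```python
# Check whether this Mac meets ServBay's macOS 12.0 (Monterey) requirement.
import platform

version = platform.mac_ver()[0]  # e.g. "14.4.1"; empty string on non-macOS
major = int(version.split(".")[0]) if version else 0
print("OK for ServBay" if major >= 12 else "macOS 12.0 or later is required")
```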
Download the latest version of ServBay


The installation file is only 20MB.

Installation Steps

  • Double-click the downloaded .dmg file.
  • In the opened window, drag the ServBay.app icon into the Applications folder.


  • When using ServBay for the first time, initialization is required.

One-click selection of Ollama


Once the installation is complete, open ServBay to begin using it.


  • Enter your password. Once the installation is complete, you can find ServBay in the Applications folder.
  • Open the main interface. Installing and starting Ollama used to require a complicated process, but with ServBay it's a single click: install the AI models you need without ever touching environment variables. Even users with no development background can manage it. You get one-click start and stop, multi-threaded rapid model downloads, and, as long as your Mac has the resources, you can run multiple large AI models simultaneously.

Make a note of Ollama's local address and port number; you'll need them later when configuring the GUI.

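Before moving on, you can confirm Ollama is actually up by hitting its HTTP API. A minimal sketch, assuming Ollama is listening on its usual default of 127.0.0.1:11434 (substitute the address and port ServBay shows you):

```python
# List the models installed in the local Ollama instance.
import json
import urllib.request

OLLAMA_HOST = "http://127.0.0.1:11434"  # replace with the address ServBay shows

with urllib.request.urlopen(f"{OLLAMA_HOST}/api/tags") as resp:
    data = json.load(resp)

for model in data.get("models", []):
    print(model["name"])  # e.g. "deepseek-r1:1.5b" once downloaded
```

If the script prints your models (or an empty list, before any downloads), Ollama is running and ready.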

One-click download of DeepSeek


On my computer, the download speed for the DeepSeek 8B model even exceeded 60MB per second, far faster than other similar tools I've tried.

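ServBay's one-click download is doing the equivalent of an `ollama pull` under the hood. If you ever want to script a download instead, Ollama's HTTP API also exposes a pull endpoint; a minimal sketch, with the same default-address assumption as above (deepseek-r1:1.5b is the tag Ollama uses for this model):

```python
# Pull a model through Ollama's HTTP API (the scripted equivalent of
# clicking "download" in ServBay, or of running `ollama pull`).
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/pull",
    data=json.dumps({"model": "deepseek-r1:1.5b"}).encode(),
    headers={"Content-Type": "application/json"},
)

# The endpoint streams progress updates as one JSON object per line.
with urllib.request.urlopen(req) as resp:
    for line in resp:
        print(json.loads(line).get("status", ""))
```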

With ServBay and Ollama, DeepSeek is now deployed locally, and it runs smoothly.


This way, we've achieved local deployment of large models using ServBay + Ollama. However, interaction is currently limited to the command line; there's no GUI yet!
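
Strictly speaking, even without a GUI you aren't stuck typing in a terminal: any script can talk to the model through Ollama's chat API. A minimal sketch, again assuming the default address and the deepseek-r1:1.5b tag:

```python
# Send a single prompt to the local DeepSeek model via Ollama's chat API.
import json
import urllib.request

payload = {
    "model": "deepseek-r1:1.5b",
    "messages": [{"role": "user", "content": "Why run an LLM locally? One sentence."}],
    "stream": False,  # ask for one complete JSON reply instead of a stream
}

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["message"]["content"])
```

Still, a proper GUI is far more comfortable for everyday chats, which brings us to Chatbox.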

2. GUI Recommendation - Chatbox (Personally Tested, Best Option)

Chatbox is easy to use (free, powerful, supports file uploads; recommendation rating: 🌟🌟🌟🌟🌟).

Chatbox Download


After downloading, access the main interface:


Click Settings and point Chatbox at your local model: choose Ollama as the model provider, enter the Ollama address and port you noted earlier, and select the DeepSeek model you downloaded.


Save the changes and start a conversation; you'll see that the new settings have taken effect.


With this, we've completed the full deployment of the DeepSeek-R1-1.5B model based on ServBay.
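
One last tip: Chatbox isn't the only client that works with this setup. Ollama also exposes an OpenAI-compatible endpoint, so most tools and scripts that speak the OpenAI chat API can point at your local model instead. A minimal sketch, same assumptions as before (no real API key is needed; Ollama ignores it):

```python
# Talk to the local model through Ollama's OpenAI-compatible endpoint.
import json
import urllib.request

payload = {
    "model": "deepseek-r1:1.5b",
    "messages": [{"role": "user", "content": "Say hello from my Mac!"}],
}

req = urllib.request.Request(
    "http://127.0.0.1:11434/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer ollama",  # placeholder; any value works locally
    },
)
with urllib.request.urlopen(req) as resp:
    out = json.load(resp)

print(out["choices"][0]["message"]["content"])
```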

Summary:

Friends, isn't it incredibly easy to locally deploy the DeepSeek-R1-1.5B model using ServBay? By following the steps above, it takes only 10 minutes to turn your computer into an "intelligent assistant."

Moreover, this model not only runs efficiently but can also make a big impact in various scenarios. Go ahead and give it a try!
