Gorchene Bader

Deploy Open Source SLMs (Small Language Models) locally - DeepSeek R1 Distilled

Building Free AI Agents: Setting Up Ollama for AI Models

This post is part of a series where we break down each step in creating fully free AI agents capable of handling various tasks.

While running models locally is a well-covered topic, this tutorial walks through each step so the series builds toward a more comprehensive solution.

For this task, we have two main options:

  1. Using Hugging Face libraries – powerful but requires more setup.
  2. Leveraging Ollama – a more streamlined, user-friendly alternative.

To keep things simple and efficient, we’ll focus on Ollama, which offers an easier setup for running AI models.


Step 1: Download and Install Ollama

To get started, we need to install Ollama on our system.

  1. Visit the Ollama Download Page.
  2. Select your operating system (Windows, macOS, or Linux).
  3. Click Download (for Windows and macOS) or copy the installation command (for Linux users).

Since I’m using Linux, I installed Ollama by running this command in my terminal:

curl -fsSL https://ollama.com/install.sh | sh
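Once the script finishes, you can confirm the binary landed on your PATH by running `ollama --version` in the terminal. If you prefer to check from a script (handy later when we automate things), here is a minimal Python sketch; the function name is my own:

```python
import shutil

def ollama_installed() -> bool:
    """Return True if the `ollama` binary is on PATH after installation."""
    return shutil.which("ollama") is not None

print(ollama_installed())
```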

Step 2: Explore and Download AI Models

Once Ollama is installed, we need to fetch a model to use.

  1. Browse the available models on the Ollama Models Page.
  2. Search for a specific model or scroll through the list of available options.
  3. Once you find a model you like, grab the command to download and run it.

For this tutorial, I’ll be using deepseek-r1:1.5b, a lightweight SLM (Small Language Model) capable of Chain of Thought (CoT) reasoning.

To download and run it, simply enter the following command in your terminal:

ollama run deepseek-r1:1.5b
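Note that `ollama run` pulls the model on first use and then drops you into an interactive chat (type `/bye` to exit). You can also pass the prompt as an argument for one-shot generation, which makes it easy to script. Below is a rough sketch using Python's subprocess module; the function names are my own, and it assumes the ollama CLI from Step 1 is installed:

```python
import subprocess

def ollama_run_cmd(model: str, prompt: str) -> list[str]:
    # `ollama run <model> "<prompt>"` prints one answer and exits,
    # instead of opening the interactive chat.
    return ["ollama", "run", model, prompt]

def generate(prompt: str, model: str = "deepseek-r1:1.5b") -> str:
    # Requires the ollama CLI on PATH; pulls the model on first use.
    result = subprocess.run(
        ollama_run_cmd(model, prompt),
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Example (uncomment once the model is available):
# print(generate("Explain Chain of Thought reasoning in one sentence."))
```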

Step 3: Running the Model for Text Generation

Now that our model is up and running, we can start using it for text generation. Simply enter some input, and let the AI do the rest!
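The terminal chat is great for experimenting, but Ollama also serves a local REST API (by default at http://localhost:11434), which is what we'll lean on when integrating with Python. Here is a minimal sketch against the `/api/generate` endpoint, assuming the Ollama server is running and the model has been pulled; the helper names are my own:

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> dict:
    # Request body for Ollama's /api/generate endpoint;
    # stream=False returns one complete JSON object instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate_via_api(prompt: str, model: str = "deepseek-r1:1.5b",
                     host: str = "http://localhost:11434") -> str:
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (uncomment with the Ollama server running):
# print(generate_via_api("Why is the sky blue? Answer briefly."))
```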



What’s Next?

Thank you for following along! If you have any feedback, feel free to share it.

In the next post, we’ll take this further by integrating the model into a Python application, showcasing how to make this SLM truly useful. Stay tuned! 🚀

Top comments (3)

Oussama Mabrouk

Can you please make a RAG tutorial next?

Gorchene Bader

Sure, stay tuned 😉

Aymen K

Hey! Check out my post on building a local RAG with DeepSeek R1. Hope it helps! 😊