Building Free AI Agents: Setting Up Ollama for AI Models
This post is part of a series where we break down each step in creating fully free AI agents capable of handling various tasks.
While this may be a common topic, this tutorial walks through every step so you can build toward a more comprehensive solution in the posts that follow.
For this task, we have two main options:
- Using Hugging Face libraries – powerful but requires more setup.
- Leveraging Ollama – a more streamlined, user-friendly alternative.
To keep things simple and efficient, we’ll focus on Ollama, which offers an easier setup for running AI models.
Step 1: Download and Install Ollama
To get started, we need to install Ollama on our system.
- Visit the Ollama Download Page.
- Select your operating system (Windows, macOS, or Linux).
- Click Download (for Windows and macOS) or copy the installation command (for Linux users).
Since I’m using Linux, I installed Ollama by running this command in my terminal:
curl -fsSL https://ollama.com/install.sh | sh
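Once the installer finishes, it's worth a quick sanity check that the Ollama CLI and its background service are available. On my setup the following commands confirm the install (the second one simply prints an empty table until you download a model):
ollama --version
ollama list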
Step 2: Explore and Download AI Models
Once Ollama is installed, we need to fetch a model to use.
- Browse the available models on the Ollama Models Page.
- Search for a specific model or scroll through the list of available options.
- Once you find a model you like, grab the command to download and run it.
For this tutorial, I’ll be using deepseek-r1:1.5b, a lightweight SLM (Small Language Model) capable of Chain of Thought (CoT) reasoning.
To download and run it, simply enter the following command in your terminal:
ollama run deepseek-r1:1.5b
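If you'd rather download the model without immediately opening an interactive chat session, Ollama also has a pull command, and ollama list shows what is stored locally:
ollama pull deepseek-r1:1.5b
ollama list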
Step 3: Running the Model for Text Generation
Now that our model is up and running, we can start using it for text generation. The run command drops you into an interactive prompt: type some input, press Enter, and the model responds. Type /bye when you want to exit the session.
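The interactive prompt isn't the only option. As a preview of programmatic use, Ollama also serves a local HTTP API (by default on port 11434 while the service is running). A minimal generate request looks roughly like this, assuming the model from Step 2 has already been downloaded:
curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:1.5b", "prompt": "Why is the sky blue?", "stream": false}'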
What’s Next?
Thank you for following along! If you have any feedback, feel free to share it.
In the next post, we’ll take this further by integrating the model into a Python application, showcasing how to make this SLM truly useful. Stay tuned! 🚀