AI models like Llama are powerful tools for building conversational applications. In this tutorial, I'll show you how to set up and deploy an AI-powered Telegram bot using Ollama (a self-hosted Llama model) and Node.js, running entirely on a virtual machine (VM) without a GPU.
You'll learn how to:
- Install Ollama and run the Llama 3.2 model.
- Set up a Node.js application with PM2 for process management.
- Build a Telegram bot that interacts with the self-hosted AI instance.
The full code is available in my GitHub repository.
Prerequisites
Make sure you have:
- A virtual machine (or server) running Ubuntu 20.04 or later.
- No GPU? No problem! We'll use a smaller model that works with CPUs.
1️⃣ Step 1: Setting Up Ollama on the VM
Ollama is an easy-to-use platform for hosting AI models locally. Follow these steps to set it up.
Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
Verify Installation
Run the following command to ensure Ollama is working:
ollama serve
This starts the Ollama server, exposing its API at http://localhost:11434.
Load a Model
Since we're using CPU-only, load a lightweight model:
ollama pull llama3.2:1b
You can explore other models in the Ollama model library.
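Once the server is running and a model is pulled, you can confirm both from Node before writing any bot code. Below is a minimal sketch, assuming Node 18+ (for the built-in fetch); `listModels` is a hypothetical helper that calls Ollama's /api/tags endpoint, which returns the locally installed models. The baseUrl parameter is injectable so the function isn't hard-wired to one host.

```javascript
// Health check against the local Ollama API (Node 18+ built-in fetch).
// GET /api/tags returns { models: [{ name: "...", ... }, ...] }.
async function listModels(baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  const data = await res.json();
  // Return just the model names, e.g. ["llama3.2:1b"]
  return data.models.map((m) => m.name);
}
```

If the pulled model does not appear in the returned list, re-run the `ollama pull` command before continuing.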
2️⃣ Step 2: Setting Up Node.js and PM2
We'll use Node.js to create a backend application that interacts with both Ollama and Telegram.
Install Node.js and PM2
sudo apt update
sudo apt install -y nodejs npm
sudo npm install -g pm2
Initialize the Project
Create a folder for your project and initialize a Node.js application:
mkdir ai-telegram-bot
cd ai-telegram-bot
npm init -y
Install Dependencies
npm install axios dotenv node-telegram-bot-api
Create the Folder Structure
Organize your project as follows:
src/
└── services/
    ├── ollamaService.js       # Handles interactions with the local Ollama API
    └── telegramBotService.js  # Telegram bot logic and API integration
3️⃣ Step 3: Writing the Code
Create ollamaService.js
This file handles interactions with the Ollama API. The function below sends a string to the API and returns a response, which can be forwarded to the Telegram user.
Refer to the Ollama API Documentation for additional features.
const axios = require("axios");
const apiUrl = "http://localhost:11434/api/generate";
async function queryOllama(message) {
try {
const response = await axios.post(apiUrl, {
model: "llama3.2:1b",
prompt: message,
stream: false,
});
const respData = response.data.response.toString();
return respData;
} catch (error) {
console.error(error);
throw new Error("Failed to query Ollama");
}
}
module.exports = { queryOllama };
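On a CPU-only VM, a single request can occasionally fail under load, so it can help to retry before giving up. The sketch below is a hypothetical `withRetry` helper (not part of the tutorial's repo) that wraps any async function with exponential backoff; the attempt count and delay are illustrative defaults.

```javascript
// Retry an async function a few times with exponential backoff.
async function withRetry(fn, attempts = 3, delayMs = 200) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Backoff doubles each attempt: 200ms, 400ms, 800ms, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Possible usage with the service above:
// const reply = await withRetry(() => queryOllama(message));
```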
Future improvement: adding support for additional models, such as vision-capable ones.
Create telegramBotService.js
This file contains the Telegram bot logic. You can find more details in the node-telegram-bot-api documentation
require("dotenv").config();
const TelegramBot = require("node-telegram-bot-api");
const { queryOllama } = require("./ollamaService");
const token = process.env.TELEGRAM_BOT_TOKEN;
const bot = new TelegramBot(token, { polling: true });
bot.on("message", async (msg) => {
const chatId = msg.chat.id;
switch (msg.text) {
case "/start":
bot.sendMessage(
chatId,
"Welcome to Ollama Bot! Send me a message and I will translate it to Ollama language."
);
break;
case "/help":
bot.sendMessage(
chatId,
"Send me a message and I will translate it to Ollama language using the llama3.2 model with 1 billion parameters. I can make mistakes, double check important infos."
);
break;
default:
bot.sendChatAction(chatId, "typing");
try {
const response = await queryOllama(msg.text);
bot.sendMessage(chatId, response);
} catch (error) {
bot.sendMessage(
chatId,
"Oops, something went wrong. Please try again."
);
}
}
});
Future improvement: Adding more commands or integrating additional services.
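One practical detail worth knowing: Telegram rejects messages longer than 4096 characters, and a model reply can exceed that. A minimal sketch, using a hypothetical `splitMessage` helper (not in the code above), chunks the response so each piece fits:

```javascript
// Split text into chunks of at most maxLen characters
// (Telegram's per-message limit is 4096).
function splitMessage(text, maxLen = 4096) {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxLen) {
    chunks.push(text.slice(i, i + maxLen));
  }
  return chunks;
}

// In the default branch of the handler, one could then send each
// chunk in order:
// for (const chunk of splitMessage(response)) {
//   await bot.sendMessage(chatId, chunk);
// }
```

This naive version may split mid-word or mid-code-block; splitting on newlines near the limit would be a friendlier refinement.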
4️⃣ Step 4: Create a Telegram Bot
Create your bot:
- Open Telegram and search for the user BotFather.
- Start a chat with BotFather and send the command /newbot.
- Follow the instructions, providing a name and username for your bot.
- Once completed, BotFather will send you a token. This token is used to authenticate your bot.
Save this token in your .env file:
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
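If the token is missing or misspelled in .env, the bot fails later with a confusing polling error. A small sketch, using a hypothetical `requireEnv` helper, fails fast at startup instead:

```javascript
// Read a required environment variable, throwing a clear error if unset.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// In telegramBotService.js, one could then replace the plain lookup with:
// const token = requireEnv("TELEGRAM_BOT_TOKEN");
```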
5️⃣ Step 5: Running the Bot
Start the Application
Start the bot using PM2 for process management:
pm2 start src/services/telegramBotService.js --name ai-telegram-bot
pm2 save
Check Logs
Ensure the bot is running correctly by checking the logs:
pm2 logs ai-telegram-bot
Now, your bot is live and ready to interact with users on Telegram!
If your bot doesn't respond, first verify that your token is valid by calling the getMe endpoint:
curl --location --request POST 'https://api.telegram.org/bot{your-token-here}/getMe'
6️⃣ How It Works
- Users send a message to the Telegram bot.
- The message is forwarded to the Ollama API for processing.
- Ollama generates a response using the Llama model.
- The bot sends the response back to the user in Telegram.
Final Thoughts
Congratulations! You've successfully built and deployed an AI-powered Telegram bot using a self-hosted Llama model.
Feel free to experiment with more advanced models or customize the bot further. If you encounter issues, leave a comment or fork the GitHub repository for more examples and enhancements.
Happy coding!