
Rodrigo Manozzo

How to Create an AI Companion Telegram Bot

AI models like Llama are powerful tools for building conversational applications. In this tutorial, I'll show you how to set up and deploy an AI-powered Telegram bot using Ollama (a self-hosted Llama model) and Node.js, running entirely on a virtual machine (VM) without a GPU.

You’ll learn how to:

  1. Install and run the Llama 3.2 model with Ollama.
  2. Set up a Node.js application with PM2 for process management.
  3. Build a Telegram bot that interacts with the self-hosted AI instance.

The full code is available in my GitHub repository.

πŸš€ Prerequisites

Make sure you have:

  • A virtual machine (or server) running Ubuntu 20.04 or later.
  • No GPU? No problem! We'll use a smaller model that works with CPUs.

1️⃣ Step 1: Setting Up Ollama on the VM

Ollama is an easy-to-use platform for hosting AI models locally. Follow these steps to set it up.

Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
Verify Installation

Run the following command to ensure Ollama is working:

ollama serve

This starts the Ollama server, exposing the API at http://localhost:11434.

Load a Model

Since we're using CPU-only, load a lightweight model:

ollama pull llama3.2:1b

You can explore other models in the Ollama model library.


2️⃣ Step 2: Setting Up Node.js and PM2

We'll use Node.js to create a backend application that interacts with both Ollama and Telegram.

Install Node.js and PM2
sudo apt update
sudo apt install -y nodejs npm
sudo npm install -g pm2
Initialize the Project

Create a folder for your project and initialize a Node.js application:

mkdir ai-telegram-bot
cd ai-telegram-bot
npm init -y
Install Dependencies
npm install axios dotenv node-telegram-bot-api
Create the Folder Structure

Organize your project as follows:

πŸ“ src/
β”œβ”€β”€ πŸ“ services/
β”‚   β”œβ”€β”€ πŸ“„ ollamaService.js   # Handles interactions with the local Ollama API
β”‚   └── πŸ“„ telegramBotService.js # Telegram bot logic and API integration

3️⃣ Step 3: Writing the Code

Create ollamaService.js

This file handles interactions with the Ollama API. The function below sends a string to the API and returns a response, which can be forwarded to the Telegram user.
Refer to the Ollama API Documentation for additional features.

const axios = require("axios");

// Ollama's generate endpoint; the server must be running (ollama serve).
const apiUrl = "http://localhost:11434/api/generate";

async function queryOllama(message) {
  try {
    const response = await axios.post(apiUrl, {
      model: "llama3.2:1b",
      prompt: message,
      stream: false, // wait for the full completion instead of streaming chunks
    });
    return response.data.response;
  } catch (error) {
    console.error(error);
    throw new Error("Failed to query Ollama");
  }
}

module.exports = { queryOllama };

Future improvement: adding support for additional models, such as vision models.
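One practical detail worth handling here: Telegram rejects messages longer than 4096 characters, and a model reply can exceed that. A small helper (illustrative, not part of the tutorial code above) can split long replies into chunks before they are passed to `bot.sendMessage`:

```javascript
// Telegram caps a single message at 4096 characters, so long model
// replies must be split before sending. This helper breaks text at the
// last newline within the limit where possible, falling back to a hard cut.
function splitForTelegram(text, limit = 4096) {
  const chunks = [];
  let rest = text;
  while (rest.length > limit) {
    let cut = rest.lastIndexOf("\n", limit);
    if (cut <= 0) cut = limit; // no usable newline: hard cut at the limit
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\n/, "");
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}

module.exports = { splitForTelegram };
```

The bot would then loop over the chunks and send each one in order.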

Create telegramBotService.js

This file contains the Telegram bot logic. You can find more details in the node-telegram-bot-api documentation.

require("dotenv").config();
const TelegramBot = require("node-telegram-bot-api");
const { queryOllama } = require("./ollamaService");

const token = process.env.TELEGRAM_BOT_TOKEN;

const bot = new TelegramBot(token, { polling: true });

bot.on("message", async (msg) => {
  const chatId = msg.chat.id;
  switch (msg.text) {
    case "/start":
      bot.sendMessage(
        chatId,
        "Welcome to Ollama Bot! Send me a message and I will translate it to Ollama language."
      );
      break;
    case "/help":
      bot.sendMessage(
        chatId,
        "Send me a message and I will translate it to Ollama language using the llama3.2 model with 1 billion parameters. I can make mistakes, double check important infos."
      );
      break;
    default:
      bot.sendChatAction(chatId, "typing");
      try {
        const response = await queryOllama(msg.text);
        bot.sendMessage(chatId, response);
      } catch (error) {
        bot.sendMessage(
          chatId,
          "Oops, something went wrong. Please try again."
        );
      }
  }
});

Future improvement: Adding more commands or integrating additional services.
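Note that the bot above is stateless: each message goes to the model on its own, with no memory of earlier turns. A minimal way to add short-term memory is to keep the last few exchanges per chat and prepend them to the prompt. The sketch below is illustrative; `ContextStore` and `buildPrompt` are hypothetical names, not part of any library:

```javascript
// Keeps the last few user/bot exchanges per chat and builds a prompt
// that includes that history, so the model sees recent context.
class ContextStore {
  constructor(maxTurns = 5) {
    this.maxTurns = maxTurns;
    this.byChat = new Map(); // chatId -> [{ user, bot }]
  }

  remember(chatId, user, bot) {
    const turns = this.byChat.get(chatId) || [];
    turns.push({ user, bot });
    // Keep only the most recent turns to bound prompt size.
    this.byChat.set(chatId, turns.slice(-this.maxTurns));
  }

  buildPrompt(chatId, message) {
    const turns = this.byChat.get(chatId) || [];
    const history = turns
      .map((t) => `User: ${t.user}\nAssistant: ${t.bot}`)
      .join("\n");
    return history ? `${history}\nUser: ${message}\nAssistant:` : message;
  }
}

module.exports = { ContextStore };
```

In the message handler, you would call `store.buildPrompt(chatId, msg.text)` before `queryOllama`, then `store.remember(chatId, msg.text, response)` after the reply arrives.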


4️⃣ Step 4: Create a Telegram Bot

Create your bot:
  1. Open Telegram and search for the user BotFather.
  2. Start a chat with BotFather and send the command /newbot.
  3. Follow the instructions, providing a name and username for your bot.
  4. Once completed, BotFather will send you a token. This token is used to authenticate your bot. Save this token in your .env file:
TELEGRAM_BOT_TOKEN=your_telegram_bot_token

5️⃣ Step 5: Running the Bot

Start the Application

Start the bot using PM2 for process management:

pm2 start src/services/telegramBotService.js --name ai-telegram-bot
pm2 save
Check Logs

Ensure the bot is running correctly by checking the logs:

pm2 logs ai-telegram-bot

Now, your bot is live and ready to interact with users on Telegram!


If your bot doesn’t respond, first confirm that your token is valid by calling the getMe endpoint (it returns basic information about the bot):

curl --location --request POST 'https://api.telegram.org/bot{your-token-here}/getMe'

6️⃣ How It Works

  1. Users send a message to the Telegram bot.
  2. The message is forwarded to the Ollama API for processing.
  3. Ollama generates a response using the Llama model.
  4. The bot sends the response back to the user in Telegram.
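The four steps above can be sketched end-to-end with the model call stubbed out. Here `queryModel` and `handleIncoming` are illustrative stand-ins for `queryOllama` and the bot's message handler, so the pipeline can be followed without a running Ollama server:

```javascript
// Stub for queryOllama: a real implementation would POST
// { model, prompt, stream: false } to http://localhost:11434/api/generate
// and return data.response from the JSON reply.
async function queryModel(prompt) {
  return `echo: ${prompt}`;
}

// Mirrors the bot's routing: commands get canned replies,
// everything else is forwarded to the model.
async function handleIncoming(text) {
  if (text === "/start") return "Welcome!";
  return queryModel(text);
}

handleIncoming("ping").then((reply) => console.log(reply)); // prints "echo: ping"
```

Swapping the stub for the real `queryOllama` turns this sketch into exactly the flow the bot implements.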

πŸ“’ Final Thoughts

Congratulations! You've successfully built and deployed an AI-powered Telegram bot using a self-hosted Llama model.

Feel free to experiment with more advanced models or customize the bot further. If you encounter issues, leave a comment or fork the GitHub repository for more examples and enhancements.

Happy coding! πŸš€
