TL;DR
By the end of this guide, we will have a fully functioning Slack bot that can answer our questions about FL0 and its features using AI 🤖✨.
Introduction
The advent of OpenAI's API has empowered countless developers to create sophisticated chatbots without breaking a sweat 🧑‍💻.
We've noticed that there's a considerable amount of curiosity within the developer community regarding the workings and features of FL0. This gave us the idea to build a simple chatbot using the GPT API.
In this article, we will build a Slack chatbot named Fl0Bot that can answer questions about FL0. 💬
We will use Node.js for our backend and Postgres as our database, and then deploy the application effortlessly with the help of FL0 🚀.
As we prepare to embark on this journey, let's kick things off with a little humor. Here's an xkcd comic strip to lighten the mood 👇
Getting Started
Let's start building our chatbot 💬.
To speed things up, in this tutorial we will use the "fl0zone/blog-express-pg-sequelize" template.
You can refer to this blog post for more details 👇
https://blog.fl0.com/building-a-software-startup-on-fl0-part-1-f0624174e738
This template gives us a basic Node.js application and a Postgres database, both dockerized.
Here's our docker-compose.yaml file 🐳
version: "3"
services:
app:
build:
context: .
target: development
env_file: .env
volumes:
- ./src:/usr/src/app/src
ports:
- 8081:80
depends_on:
- db
db:
image: postgres:14
restart: always
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin
POSTGRES_DB: my-startup-db
volumes:
- postgres-data:/var/lib/postgresql/data
ports:
- 5432:5432
volumes:
postgres-data:
Folder Structure
Before we get started, here's a look at our final project folder structure for reference 👇
And here's a high-level overview of what we are going to build 👇
Now, let's delve into the code 🧑‍💻
Step 1: Project Setup
After we have created our new project using the above template, we first need to install a few packages.
```bash
npm install axios @slack/bolt openai uuid
```
After this, we need to get our OpenAI API key 🔑.
For this, we create an account at platform.openai.com.
Then we select the "API" option and click on "View API Keys" in the account options.
Now we can go ahead and create a new API key as shown below 👇
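If you'd like to confirm the key works before wiring it into the app, here's a quick optional sanity check — a minimal sketch, assuming the openai v3 SDK installed above and the key exported as OPENAI_API_KEY:

```javascript
// check-key.js — optional sanity check for the OpenAI API key (illustrative only)
const { Configuration, OpenAIApi } = require("openai");

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// Listing the available models is a cheap way to verify the key is valid
openai
  .listModels()
  .then((res) => console.log(`Key works — ${res.data.data.length} models available`))
  .catch((err) => console.error("Key check failed:", err.response?.status ?? err.message));
```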
Step 2: Config Setup
We create a .env.example file to list the environment variables, just for reference 👇
```bash
NODE_ENV=development
DATABASE_URL=postgres_url
BOT_SYSTEM=system_prompt
OPENAI_API_KEY=open_api_key
SLACK_WEBHOOK=slack_webhook
```
Then we add these variables to our existing config file 👇
src/config/index.js
```javascript
module.exports = {
  "local": {
    "use_env_variable": "DATABASE_URL",
    "openai_api_key": "OPENAI_API_KEY",
    "bot_system": "BOT_SYSTEM",
    "slack_webhook": "SLACK_WEBHOOK",
    synchronize: true
  },
  "development": {
    "use_env_variable": "DATABASE_URL",
    "openai_api_key": "OPENAI_API_KEY",
    "bot_system": "BOT_SYSTEM",
    "slack_webhook": "SLACK_WEBHOOK",
    synchronize: true
  },
  "production": {
    "use_env_variable": "DATABASE_URL",
    "openai_api_key": "OPENAI_API_KEY",
    "bot_system": "BOT_SYSTEM",
    "slack_webhook": "SLACK_WEBHOOK",
    synchronize: true
  }
}
```
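Note that this config stores the names of the environment variables rather than their values; the application resolves the actual values through process.env at runtime. As a minimal sketch of that lookup (the same pattern used later in src/index.js):

```javascript
// The config holds env var *names*, so we look the actual values up at runtime
const env = process.env.NODE_ENV || 'development';
const config = require('./config/index.js')[env];

const openaiApiKey = process.env[config.openai_api_key]; // reads OPENAI_API_KEY
const slackWebhook = process.env[config.slack_webhook];  // reads SLACK_WEBHOOK
```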
Step 3: Creating Models
Now let's get started with setting up our database. Since we are using the Sequelize ORM, we need to create models for our Postgres database 👇
Here we create a Chat model in which we store all the communication between the FL0Bot and the User.
Every time a new request is made, we SELECT the most recent chats from this table and send them to the FL0Bot as context. 💬
src/models/chat.js
```javascript
'use strict';
const { Sequelize, DataTypes } = require('sequelize');

module.exports = (sequelize) => {
  const Chat = sequelize.define(
    'Chat',
    {
      chat_id: {
        type: DataTypes.UUID,
        primaryKey: true,
        defaultValue: Sequelize.UUIDV4,
      },
      person_id: {
        type: DataTypes.STRING,
        allowNull: false,
      },
      role: {
        type: DataTypes.STRING,
      },
      content: {
        type: DataTypes.STRING(10000),
      },
      time_created: {
        type: DataTypes.DATE,
        defaultValue: DataTypes.NOW,
      },
      time_updated: {
        type: DataTypes.DATE,
        defaultValue: DataTypes.NOW,
      },
    },
    {
      tableName: 'chats', // Specify the table name explicitly if different from the model name
      timestamps: false, // Disable timestamps (createdAt, updatedAt)
      hooks: {
        beforeValidate: (chat, options) => {
          // Update the time_updated field to the current timestamp before saving the record
          chat.time_updated = new Date();
        },
      },
    }
  );

  return Chat;
};
```
Step 4: Creating the Chat Bot
Now let's move on to writing the code for our chatbot! 🤖
First we create our handleAppMention function.
Here we parse the message text, strip out any mentions, and then look for an existing chat session for the user, creating one if it doesn't exist.
We fetch the last five chat messages to maintain the context of the conversation. 💬✨
We then call OpenAI's API to get a completion in response to the user's input. 🤖
We also add a system message to the conversation, taken from config.bot_system. This gives GPT the context about FL0.
Example GPT System Prompt
You are a bot that answers queries only around a specific product: fl0 and you will tell nothing about any other product or tools. FL0 is a platform for easily deploying your code as containers. Just push code to your repo and FL0 will build and deploy your app to a fully managed infrastructure complete with databases, logging, multiple environments and lots more!
src/index.js
```javascript
async function handleAppMention({ event }) {
  const mentionRegex = /<@[\w\d]+>/g; // Regex pattern to match the mention
  const msg = event.text.replace(mentionRegex, '');
  const person_id = event.user;
  const query = msg;

  try {
    const userExists = await Chat.findOne({ where: { person_id: person_id }, raw: true });
    if (!userExists) {
      // Seed the conversation with the system prompt for a new user
      await Chat.create({ person_id: person_id, role: 'system', content: process.env[config.bot_system] });
    }

    // Fetch the five most recent messages, then restore chronological order for the model
    const chats = await Chat.findAll({ where: { person_id }, order: [['time_created', 'DESC']], limit: 5, raw: true });
    const chatsGpt = chats.reverse().map((item) => ({ role: item.role, content: item.content }));
    chatsGpt.push({ role: 'user', content: query });

    const response = await openai.createChatCompletion({
      model: 'gpt-3.5-turbo',
      messages: chatsGpt,
    });

    // Persist both the user's question and the assistant's answer
    await Chat.bulkCreate([
      { person_id, role: 'user', content: query },
      { person_id, role: 'assistant', content: response.data.choices[0].message.content },
    ]);

    // Send the answer back to Slack via the incoming webhook
    await axios.post(process.env[config.slack_webhook], { text: response.data.choices[0].message.content });

    return response.data.choices[0].message.content;
  } catch (error) {
    console.log('ERROR', error);
    return 'Failed to process chat';
  }
}
```
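For clarity, the chatsGpt array we pass to createChatCompletion is just the standard chat-completion message list. With the system prompt seeded above, a typical payload might look roughly like this (the contents here are purely illustrative):

```javascript
// Illustrative shape of the messages array sent to the Chat Completions API
const exampleMessages = [
  { role: 'system', content: 'You are a bot that answers queries only around a specific product: fl0...' },
  { role: 'user', content: 'What is FL0?' },
  { role: 'assistant', content: 'FL0 is a platform for easily deploying your code as containers.' },
  { role: 'user', content: 'How do I add a database to my app?' }, // the new query is appended last
];
```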
👉 Coming to our routes, we've set up an endpoint (/slack/action-endpoint) for Slack's event subscriptions, which responds to app_mention events.
When a mention comes in, we call the handleAppMention function, and its answer is posted back to Slack by our bot via the incoming webhook.
src/index.js
```javascript
const express = require('express');
const { sequelize, Chat } = require('./models');
const process = require('process');
const env = process.env.NODE_ENV || 'development';
const config = require(__dirname + '/config/index.js')[env];
const axios = require('axios');

const app = express();
app.use(express.json());

const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
  apiKey: process.env[config.openai_api_key],
});
const openai = new OpenAIApi(configuration);

const port = process.env.PORT ?? 3000;

app.post('/slack/action-endpoint', async (req, res) => {
  const { challenge } = req.body;
  // Slack sends a one-time "challenge" when verifying the request URL
  if (challenge) {
    res.status(200).send(challenge);
  } else {
    try {
      switch (req.body.event.type) {
        case 'app_mention':
          // Acknowledge immediately; the reply is delivered via the Slack webhook
          handleAppMention(req.body);
          res.status(200).json({ message: 'Success' });
          break;
        default:
          res.status(400).json({ message: 'Bad Request' });
          break;
      }
    } catch (error) {
      console.error(`Error processing Slack event: ${error}`);
      res.status(500).json({ message: error });
    }
  }
});

app.listen(port, async () => {
  console.log(`Example app listening on port ${port}`);
  try {
    await sequelize.authenticate();
    sequelize.options.dialectOptions.ssl = false;
    // force: false keeps existing chat history across restarts
    await sequelize.sync({ force: false });
    console.log('Connection has been established successfully.');
  } catch (error) {
    console.error('Unable to connect to the database:', error);
  }
});
```
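To try the endpoint locally before wiring up Slack, you can simulate both the URL verification challenge and an app_mention event with a small script. This is a rough sketch only — it assumes the server is running on http://localhost:3000 and that your .env values are set:

```javascript
// test-endpoint.js — simulate Slack's requests against the local server (illustrative only)
const axios = require('axios');

const BASE_URL = 'http://localhost:3000'; // assumed local address

async function main() {
  // 1. URL verification: Slack sends a challenge and expects it echoed back
  const challengeRes = await axios.post(`${BASE_URL}/slack/action-endpoint`, {
    challenge: 'test-challenge-123',
  });
  console.log('Challenge echoed:', challengeRes.data);

  // 2. A fake app_mention event, shaped like Slack's Events API payload
  const mentionRes = await axios.post(`${BASE_URL}/slack/action-endpoint`, {
    event: {
      type: 'app_mention',
      user: 'U12345678',
      text: '<@BOTID> What is FL0?',
    },
  });
  console.log('Mention accepted:', mentionRes.data);
}

main().catch((err) => console.error(err.message));
```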
Step 5: Deploying with FL0
Now that we have a functional API and database, it's time to deploy them to a server! 🚀
In this tutorial we're using FL0, a platform designed for straightforward deployment of dockerized Node.js applications, fully integrated with a database.
We just need to push our repo to GitHub. 🫸
Then we deploy our project simply by connecting our GitHub account and selecting the project.
After that, we add the environment variables listed in the .env.example file.
You can find a detailed walkthrough of the deployment process in this blog post 👉 https://blog.fl0.com/building-a-software-startup-on-fl0-part-1-f0624174e738
Step 6: Setting up Slack App
Now that our project is deployed, let's create our Slack app.
We visit https://api.slack.com/apps and click on Create New App.
We'll name our bot "FL0Bot" 👇
In the Event Subscriptions section, we enable events, set the request URL to our deployed /slack/action-endpoint, and subscribe to the app_mention bot event.
We also need to get our webhook URL and pass it as the SLACK_WEBHOOK environment variable to our FL0 deployment.
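For reference, posting to a Slack webhook is just an HTTP POST with a text payload — this is exactly what handleAppMention does with the bot's answer. A minimal sketch, assuming SLACK_WEBHOOK holds the webhook URL:

```javascript
// Minimal example of posting a message through a Slack webhook
const axios = require('axios');

axios
  .post(process.env.SLACK_WEBHOOK, { text: 'Hello from FL0Bot! 🤖' })
  .then(() => console.log('Message posted to Slack'))
  .catch((err) => console.error('Webhook post failed:', err.message));
```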
Conclusion
So, there we have it: a fully operational chatbot tailored to answer questions about FL0 and its features, built using Node.js, Postgres, and OpenAI's GPT, and seamlessly deployed with FL0!
Here's the link to our repository for reference ➡️
Visit the Fl0Bot Repo
The power of OpenAI's APIs, combined with quick deployments on FL0, makes it effortless to build your own AI bots 🚀🚀.
Head over to fl0.com to start building your own bots 🧑‍💻.