Some background first
Yeah, we've all become pretty used to ChatGPT, to the point where we sometimes catch ourselves thinking, "Really, was that your best?" Sure, some replies leave us scratching our heads. Yet, it's astounding to reflect on how swiftly we've adapted to this technology. It's simply nuts.
With that in mind, I always try to look closely at new features instead of just accepting them as they are, hoping to find some really useful parts.
This was exactly the case when I first saw the new OpenAI Assistants feature. Initially, I was skeptical, failing to see what set it apart.
However, things got interesting when I realized that beyond adding a dash of personality to the assistant and fine-tuning it, you could also inject live data into conversations as needed (using functions).
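To give a concrete feel for what that looks like, here's a rough sketch of registering a function with an assistant using the openai Python SDK (v1.x). The assistant's name, instructions, model, and the get_latest_feedback function are all hypothetical, just to show the shape of the setup, not what we actually run:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tool: the real data source and fields are up to you.
assistant = client.beta.assistants.create(
    name="Wellbeing Coach",
    instructions=(
        "You are a friendly work-wellbeing coach. "
        "Use the provided tools to ground your advice in the user's latest feedback."
    ),
    model="gpt-4-turbo-preview",
    tools=[{
        "type": "function",
        "function": {
            "name": "get_latest_feedback",  # hypothetical function name
            "description": "Fetch the user's most recent wellbeing feedback",
            "parameters": {
                "type": "object",
                "properties": {
                    "user_id": {
                        "type": "string",
                        "description": "Internal user identifier",
                    }
                },
                "required": ["user_id"],
            },
        },
    }],
)
```

When the model decides mid-conversation that it needs that data, the run pauses with a requires_action status; you execute the function on your side and submit the result back, and the assistant weaves it into its reply.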
After this realization and a few brainstorming sessions, we at Qomprendo saw potential and decided to give it a go.
Well, it turned out to be just perfect, so we used that technology to develop a brand-new wellbeing coach that can interact with users' feedback and act as a personal work-wellbeing assistant.
The Architecture
I decided to share the blueprint of the project, stripped of any proprietary logic, offering a foundation for anyone looking to build, train, and deploy their own assistant.
The source code leverages the Serverless Framework to simplify the deployment.
The setup is straightforward: a WebSocket API Gateway that handles queries and delivers responses, an SQS queue, a DynamoDB table, and a few Lambdas here and there (it's all AWS, of course).
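To make the flow a bit more tangible, here's a minimal sketch of the WebSocket-facing Lambda handing work off to SQS. The handler name, the QUEUE_URL environment variable, and the message shape are assumptions for illustration, not necessarily what the repo uses:

```python
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # hypothetical env var set by the Serverless config


def handler(event, context):
    """WebSocket $default route: take the incoming message and queue it for processing."""
    connection_id = event["requestContext"]["connectionId"]
    body = json.loads(event.get("body") or "{}")

    # Defer the slow OpenAI round-trip to a worker Lambda behind SQS,
    # so the socket handler can return immediately.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({
            "connectionId": connection_id,
            "message": body.get("message", ""),
        }),
    )
    return {"statusCode": 200}
```

A worker Lambda behind the queue can then take its time talking to OpenAI and push the reply back over the same connection ID.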
Important note: OpenAI currently doesn't offer a webhook or response hook system, so we're left relying on polling, which can be quite tedious. But no worries, I've got you covered. Within the project, you'll find a custom class designed to interact seamlessly with the OpenAI API, smoothing over the polling process with an efficient exponential backoff system.
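To illustrate the idea (this is not the class from the repo, just a minimal sketch with the openai Python SDK v1.x): poll the run's status and double the wait between checks, up to a cap:

```python
import time

from openai import OpenAI

client = OpenAI()

# States where polling should stop (requires_action means tool outputs are needed).
TERMINAL_STATES = {"completed", "failed", "cancelled", "expired", "requires_action"}


def wait_for_run(thread_id: str, run_id: str,
                 initial_delay: float = 0.5, max_delay: float = 8.0, timeout: float = 120.0):
    """Poll a run until it reaches a terminal state, backing off exponentially."""
    delay = initial_delay
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status in TERMINAL_STATES:
            return run
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # exponential backoff, capped
    raise TimeoutError(f"Run {run_id} did not finish within {timeout}s")
```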
How to adapt
As I mentioned earlier, this is basically a starting point or a foundation for building your own assistant. It covers the essentials of message flow. Here are some tips to enhance it:
- Keep track of messages on your end. Even though OpenAI allows you to fetch all the messages in a thread, you might want to record details like when a message was sent or seen, among other things (see the first sketch after this list).
- Develop your own way of managing threads. To demonstrate, the project starts a new thread each time someone opens a new socket connection. However, you might prefer linking threads to individual users or giving users the choice to start a new thread whenever they want. The approach you take is entirely up to you.
- Handle errors effectively. Depending on what your project needs, you might retry sending a message a few times if it doesn't get through at first, or you might just skip it and report the error. It's your call. A tip: the retry behaviour of the SQS trigger is a good place to experiment (see the second sketch after this list).
- Incorporate notifications. Given that OpenAI's response times can be slow, there's a chance users will leave the chat before getting a reply. Think about adding a notification feature for cases where you have an answer ready but the user's socket connection has already closed.
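First sketch, for the message-tracking tip: a minimal example of recording each message in DynamoDB with your own metadata. The table name, key schema, and attributes are assumptions for illustration, not the repo's actual schema.

```python
import time

import boto3

dynamodb = boto3.resource("dynamodb")
messages_table = dynamodb.Table("AssistantMessages")  # hypothetical table name


def record_message(thread_id: str, message_id: str, role: str, text: str) -> None:
    """Store a copy of each message along with our own metadata (sent/seen timestamps)."""
    messages_table.put_item(Item={
        "threadId": thread_id,    # partition key (assumed schema)
        "messageId": message_id,  # sort key (assumed schema)
        "role": role,             # "user" or "assistant"
        "text": text,
        "sentAt": int(time.time()),
        "seenAt": None,           # filled in later when the client acknowledges it
    })
```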
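Second sketch, for the error-handling tip: one simple approach is to let SQS do the retrying by reporting partial batch failures from the worker Lambda. This assumes the SQS event source mapping has ReportBatchItemFailures enabled; process_message is a placeholder for the real work.

```python
import json


def process_message(payload: dict) -> None:
    """Placeholder for the real work: call OpenAI, then push the reply over the socket."""
    ...


def worker_handler(event, context):
    """SQS-triggered worker: report failed records so SQS redelivers only those."""
    failures = []
    for record in event["Records"]:
        try:
            process_message(json.loads(record["body"]))
        except Exception:
            # SQS will retry just this record, up to the queue's maxReceiveCount,
            # and can then route it to a dead-letter queue if you configure one.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```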
Why Serverless
Whenever people ask me that, I always think: why not? It's cheaper, it's easier to maintain, and by leveraging the cloud, half of the job is already done.
Source code
You can explore the GitHub repo for all the details on how to deploy and tailor the project to fit your needs. Whether you're aiming to develop a digital assistant of your own or just curious about the technology, everything you need is right there.