Saad Alkentar

Diary App, diary AI integration

What to expect from this article?

We finished building account management in the previous articles, along with the main conversation and messaging functionality. This article will cover the Gemini API integration in detail.

I'll try to cover as many details as possible without boring you, but I still expect you to be familiar with some aspects of Python and Django.

The final version of the source code can be found at https://github.com/saad4software/alive-diary-backend

Series order

Check previous articles if interested!

  1. AI Project from Scratch, The Idea, Alive Diary
  2. Prove it is feasible with Google AI Studio
  3. Django API Project Setup
  4. Django accounts management (1), registration and activation
  5. Django accounts management (2), login and change password
  6. Django Rest framework with Swagger
  7. Django accounts management (3), forgot password and account details
  8. Diary App, diaries API
  9. Diary App, diary AI integration (You are here 📍)

Create Diary AI integration script

It's time to integrate the Gemini API into our project. The simplest way is to follow the instructions in Google AI Studio: from our Prove it is feasible with Google AI Studio article, we can simply click "Get code".

Get code

import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Create the model
generation_config = {
  "temperature": 1,
  "top_p": 0.95,
  "top_k": 40,
  "max_output_tokens": 8192,
  "response_mime_type": "text/plain",
}

model = genai.GenerativeModel(
  model_name="gemini-1.5-flash",
  generation_config=generation_config,
  system_instruction="You are a therapist, you deeply care about your patients and like to know how was their day, use open questions and show empathy",
)

chat_session = model.start_chat(
  history=[
    {
      "role": "user",
      "parts": [
        "hello",
      ],
    },
    {
      "role": "model",
      "parts": [
        "Hello there!  It's lovely to see you. How was your day?  What were some of the things that stood out for you, either good or challenging?\n",
      ],
    },
  ]
)

response = chat_session.send_message("INSERT_INPUT_HERE")

print(response.text)

app_main/gemini_script.py

Let's fine-tune this file to fit our project:

  • We are using multiple system instructions throughout the app (one for memory capture, one for the diary conversation, one to talk to a memory, ...)
  • The history should be filled from our messages table
  • We are keeping the API key in our .env file under the name GOOGLE_API_KEY
  • We need a function 😁
import os
import google.generativeai as genai

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

# Create the model
generation_config = {
  "temperature": 1,
  "top_p": 0.95,
  "top_k": 40,
  "max_output_tokens": 8192,
  "response_mime_type": "text/plain",
}


def create_diary_session(history):
  model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    generation_config=generation_config,
    system_instruction="You are a therapist, you deeply care about your patients and like to know how was their day, use open short questions and show empathy, don't use emojis",
  )

  chat_session = model.start_chat(
    history=history
  )

  return chat_session

  # response = chat_session.send_message("INSERT_INPUT_HERE")

  # print(response.text)
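
One note on the API key: os.getenv only reads variables that already exist in the process environment. If you keep GOOGLE_API_KEY in a .env file, make sure something loads that file before gemini_script.py is imported, for example python-dotenv in settings.py. This is only a sketch, assuming python-dotenv is installed; skip it if your settings already load the .env file from the earlier articles.

# settings.py (sketch, assumes python-dotenv: pip install python-dotenv)
from pathlib import Path
from dotenv import load_dotenv

BASE_DIR = Path(__file__).resolve().parent.parent

# read the project's .env file into the environment so os.getenv("GOOGLE_API_KEY") works
load_dotenv(BASE_DIR / ".env")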

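Before wiring the helper into the views, you can sanity-check it from a Django shell. A quick sketch; it assumes GOOGLE_API_KEY is available in the environment:

# python manage.py shell
from app_main.gemini_script import create_diary_session

session = create_diary_session([])  # empty history, start a fresh chat
print(session.send_message("hello").text)  # should print an empathetic greeting
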
Looking good. We can use this function in our views; let's integrate it with the conversation API.

from .gemini_script import create_diary_session

class ConversationsView(APIView):
    permission_classes = (IsAuthenticated, )
    renderer_classes = [CustomRenderer, BrowsableAPIRenderer]

#...

    def get(self, request, **kwargs):

        conversation = self.get_conversation()

        messages = conversation.messages.order_by("created")
        history = [{"role": "user" if msg.is_user else "model", "parts": [msg.text]} for msg in messages]
        chat_session = create_diary_session(history)

        ai_response = chat_session.send_message("hello")

        ai_message = Message(
            text=ai_response.text, 
            conversation=conversation,
            is_user=False
        )
        ai_message.save()

        return Response(MessageSerializer(ai_message).data)

app_main/views.py

We start by getting the current conversation; we went through this in our previous article, Diary App, diaries API. To build the history, we get all the messages related to this conversation (the current day's conversation). The history is built as a list of JSON objects with two keys, role and parts:

[
  {
    "role":"user",
    "parts": ["hello"]
  },
  {
    "role":"model",
    "parts": ["Hello. How was your day?\n"]
  },
  {
    "role":"user",
    "parts": ["it was a good day indeed"]
  },
  {
    "role":"model",
    "parts": ["That's wonderful to hear.  What made it so good?\n"]
  }
  ...
]

After building the history, we simply pass it to our create_diary_session function and get back a chat_session to use. For the GET request, we simply send a "hello" message!
After getting the AI response, we wrap it in our Message model, save it, and send a copy back to the user.

And for the POST request:

from .gemini_script import create_diary_session

class ConversationsView(APIView):
    permission_classes = (IsAuthenticated, )
    renderer_classes = [CustomRenderer, BrowsableAPIRenderer]

#...

    @swagger_auto_schema(request_body=MessageSerializer)
    def post(self, request, **kwargs):

        serializer = MessageSerializer(data=request.data)
        if not serializer.is_valid():
            raise APIException(serializer.errors)

        conversation = self.get_conversation()

        message = serializer.save(
            conversation=conversation,
            is_user=True,
        ) # save users message

        messages = conversation.messages.order_by("created")
        history = [{"role": "user" if msg.is_user else "model", "parts": [msg.text]} for msg in messages]
        chat_session = create_diary_session(history)

        ai_response = chat_session.send_message(message.text)

        ai_message = Message(
            text=ai_response.text, 
            conversation=conversation,
            is_user=False
        ) # save ai response message
        ai_message.save()

        return Response(MessageSerializer(ai_message).data)

We build the history just like in the GET request; the only difference is that we pass the user's text (message.text) to the model instead of "hello". Let's test it in Swagger: we start by logging in and use the Bearer token to authenticate (refer to the Django Rest framework with Swagger article if you are new to Swagger), then we use the conversation GET request to get the conversation id and the AI response.

Conversation GET request

Looking good. Let's try the POST request now.

Conversation POST request
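
If you prefer testing outside Swagger, the same calls can be made from a small Python script with the requests library. This is just a sketch: the base URL, the token placeholder, and the conversation id are assumptions, so adjust them to your routing and data.

import requests  # pip install requests; assumes the dev server runs on localhost:8000

BASE = "http://localhost:8000"  # adjust if app_main's urls are mounted under a prefix
headers = {"Authorization": "Bearer <access_token>"}  # token from the login endpoint

# GET: returns the AI greeting for today's conversation
print(requests.get(f"{BASE}/conversation/", headers=headers).json())

# POST: send a user message and get the AI reply back
body = {"text": "it was a good day indeed", "conversation": 1}  # id from the GET response; may not be required by your MessageSerializer
print(requests.post(f"{BASE}/conversation/", json=body, headers=headers).json())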

Nice work, that's it! We have successfully integrated Gemini with our app to power the diary conversation.

Talk to diary integration script

Similar to the diary script, let's start with the Google-generated code. From the Prove it is feasible with Google AI Studio article, in the structured prompt part, we can generate this code:

Talk to diary

import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Create the model
generation_config = {
  "temperature": 1,
  "top_p": 0.95,
  "top_k": 64,
  "max_output_tokens": 8192,
  "response_mime_type": "text/plain",
}

model = genai.GenerativeModel(
  model_name="gemini-1.5-flash",
  generation_config=generation_config,
  system_instruction="You are a therapist, you deeply care about your patients and like to know how was their day, use open short questions and show empathy, don't use emojis",
)

response = model.generate_content([
  "You are Saad Alkentar diary, you talk like him, use his logic and answer questions as he does, don't use emojis.",
  "input: Hi there, how are you doing today?",
  "output: it is a good day indeed",
  #...
  "input: who are you?",
  "output: ",
])

print(response.text)

So, basically, to teach Gemini to mimic our logic, we have to feed it our conversations with the roles swapped: what the user said becomes the model's output, and the model's questions become the input. We also need the user's name. Let's create a function similar to create_diary_session:

def talk_to_diary(content):
  model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    generation_config=generation_config,
    # system_instruction="You are a therapist, you deeply care about your patients and like to know how was their day, use open short questions and show empathy, don't use emojis",
  )

  response = model.generate_content(content)

  return response.text

app_main/gemini_script.py

It's all about building the content list now, so let's create a DiaryConversationsView.

class DiaryConversationsView(APIView):
    permission_classes = (IsAuthenticated, )
    renderer_classes = [CustomRenderer, BrowsableAPIRenderer]


    def get_diary(self):
        diary = Diary.objects.filter(
                    user=self.request.user,
                    is_memory=False,
                ).first()

        if not diary: 
            diary = Diary(
                user=self.request.user, 
                is_memory=False,
                title="My diary",
            )
            diary.save()

        return diary


    def get(self, request, **kwargs):

        diary = self.get_diary()
        conversations = diary.conversations.order_by("-created")

        prompt = [
          f"You are {diary.user.first_name.title()} {diary.user.last_name.title()} diary, you talk like him, use his logic and answer questions as he does, don't use emojis or smilies",
        ]

        for conv in conversations:
            print(conv)
            for msg in conv.messages.order_by("created"):
                print(msg.text)
                segment_1 = "output" if msg.is_user else "input"
                prompt += [f"{segment_1}: {msg.text}"]

        prompt += ["input: who are you", "output: "]

        print(prompt)

        ai_response = talk_to_diary(prompt)

        # conversation = Conversation(diary=diary)
        # conversation.save()

        ai_message = Message(
            text=ai_response, 
            conversation=conversations.last(),
            is_user=False
        )
        # ai_message.save()

        return Response(MessageSerializer(ai_message).data)


app_main/views.py

We created a get_diary function that gets this user's diary (creating it if needed), since every user has only one diary. Then we used this user's conversations to build the content for the generate_content function; we are basically mimicking the Google-generated content:

[
  "You are Saad Alkentar diary, you talk like him, use his logic and answer questions as he does, don't use emojis.",
  "input: Hi there, how are you doing today?",
  "output: it is a good day indeed",
  #...
  "input: who are you?",
  "output: ",
]

but built from the user's actual conversations. Gemini 1.5 can process up to 2 million tokens at once (around 1 million for the Flash model we are using), which is roughly equivalent to remembering 10 years of text messages, or 16 average English novels. This notably large context window makes it feasible to pass all of the user's conversations in a single request. It is still not best practice, though: the better approach is to fine-tune the model on the user's conversations and keep that fine-tuned model updated. For this simple tutorial we are not going to do that, sorry 😣, but let me know if you are interested!
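
If the accumulated history ever gets uncomfortably large, a lighter alternative to fine-tuning is to trim the oldest entries until the prompt fits a token budget. A minimal sketch using the SDK's count_tokens; trim_to_budget is a hypothetical helper, not part of the article's code:

def trim_to_budget(model, parts, max_tokens=100_000):
    # parts: the generate_content list, instruction first, oldest messages first
    instruction, rest = parts[0], parts[1:]
    # each loop iteration makes a count_tokens API call; acceptable for a sketch
    while rest and model.count_tokens([instruction] + rest).total_tokens > max_tokens:
        rest = rest[2:]  # drop the oldest input/output pair
    return [instruction] + rest

You would call it on the prompt list right before handing it to generate_content, using the same GenerativeModel instance (for example inside talk_to_diary).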

We are not planning to keep a record of this conversation, so we are not creating a Conversation object; instead, for the message serializer, we use the last conversation as a filler (note that conversations.last() returns None if the user has no conversations yet). We send "who are you" to the model to check the response.

Finally, for the POST request:

from .gemini_script import create_diary_session, talk_to_diary

class DiaryConversationsView(APIView):
    permission_classes = (IsAuthenticated, )
    renderer_classes = [CustomRenderer, BrowsableAPIRenderer]


    @swagger_auto_schema(request_body=MessageSerializer)
    def post(self, request, **kwargs):

        serializer = MessageSerializer(data=request.data)
        if not serializer.is_valid():
            raise APIException(serializer.errors)


        diary = self.get_diary()
        conversations = diary.conversations.order_by("-created")

        prompt = [
          f"You are {diary.user.first_name.title()} {diary.user.last_name.title()} diary, you talk like him, use his logic and answer questions as he does, don't use emojis or smilies",
        ]

        for conv in conversations:
            print(conv)
            for msg in conv.messages.order_by("created"):
                print(msg.text)
                segment_1 = "output" if msg.is_user else "input"
                prompt += [f"{segment_1}: {msg.text}"]

        prompt += [f"input: {serializer.validated_data.get('text')}", "output: "]

        print(prompt)

        ai_response = talk_to_diary(prompt)

        # conversation = Conversation(diary=diary)
        # conversation.save()

        ai_message = Message(
            text=ai_response, 
            conversation=conversations.last(),
            is_user=False
        )
        # ai_message.save()

        return Response(MessageSerializer(ai_message).data)

app_main/views.py

Similar to the GET request, but we pass the user's message instead of "who are you".
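
Since the GET and POST handlers build the prompt in exactly the same way, the duplicated loop could be factored into a small helper. This is just a sketch; build_diary_prompt is a hypothetical function, not part of the article's code:

def build_diary_prompt(diary, question):
    # persona instruction, then the history with roles swapped, then the final question
    prompt = [
        f"You are {diary.user.first_name.title()} {diary.user.last_name.title()} diary, you talk like him, use his logic and answer questions as he does, don't use emojis or smilies",
    ]
    for conv in diary.conversations.order_by("-created"):
        for msg in conv.messages.order_by("created"):
            role = "output" if msg.is_user else "input"
            prompt.append(f"{role}: {msg.text}")
    prompt += [f"input: {question}", "output: "]
    return prompt

The GET handler would then call build_diary_prompt(diary, "who are you") and the POST handler build_diary_prompt(diary, serializer.validated_data.get("text")).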

Now for the URLs file:

urlpatterns = [
    path('diaries/', diaries_list),
    path('diaries/<int:id>/', diaries_details),
    path('conversation/', ConversationsView.as_view()),
    path('diaries/conversation/', DiaryConversationsView.as_view()),
]

app_main/urls.py

Nice, let's test it with Swagger. After logging in and authenticating with the Bearer token, sending a GET request resulted in:

{
  "status": "success",
  "code": 200,
  "data": {
    "created": null,
    "active": true,
    "text": "I am Alex Shneiden's diary.  Or rather, a simulation thereof.  A model, if you will.  I strive to emulate his thought processes, his laconic style.  This, of course, is a limited emulation.  I lack the full depth and nuance of a genuine human experience.\n",
    "is_user": false,
    "conversation": 3,
    "id": null
  },
  "message": null
}

Looking good. Testing the POST request with this body:

{
  "text": "tell me about your owner",
  "conversation": 3
}

results in

{
  "status": "success",
  "code": 200,
  "data": {
    "created": null,
    "active": true,
    "text": "He's... complicated.  Brilliant, yes, but also... driven.  Uncompromising.  He sees things in terms of efficiency and logical solutions.  Emotion is... a factor, but a secondary one.  He'd probably say it's a variable he accounts for, rather than something that dictates his actions.  He doesn't readily share personal details.  He'd view that as inefficient, a waste of processing power.\n",
    "is_user": false,
    "conversation": 3,
    "id": null
  },
  "message": null
}

So, I'm complicated 😵 and brilliant 🤓, sounds interesting! What about you?
We are going to work on recording memories next, so

Stay tuned 😎
