Aryan Kargwal

Building a Multi-Turn Assistant Application Using Llama, Claude, and GPT-4o

💻Github: https://github.com/aryankargwal/genai-tutorials/tree/main/multi-turn-agents
🎥Youtube: https://youtu.be/S9iHpExFrTs

In this guide, we’ll walk through building a multi-turn AI assistant application that chains several LLM-backed assistants together. The application is designed to streamline complex workflows such as internet retrieval, market research, campaign generation, and image creation. We will rely on Tune Studio for AI model orchestration and Streamlit for the front-end user interface. The end goal is a fully automated, assistant-led pipeline that completes end-to-end tasks by calling multiple AI assistants in sequence, which is what we mean by a multi-turn workflow.


What is a Multi-Turn AI Assistant Application?

In the context of AI and automation, a multi-turn assistant application is one where multiple interactions (or "turns") are required to complete a task. The application maintains context throughout these turns and allows each assistant or model to perform specific sub-tasks in a coordinated manner. This approach contrasts with single-turn applications, where the AI assistant addresses a single user query without needing to track prior inputs or outputs.

In this tutorial, the multi-turn approach allows AI assistants to collaborate across multiple steps:

  1. Market Research Assistant gathers data from the web.
  2. Analytics Assistant processes the research and generates insights.
  3. Campaign Generation Assistant creates marketing strategies.
  4. Image Generation Assistant produces a campaign poster.

Each assistant plays a crucial role and passes context to the next in line, ensuring a smooth and coherent user experience.
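
Before diving into the individual assistants, here is a simplified sketch of the multi-turn idea itself: each step takes the previous step's output as its input, so context flows through the whole chain. This is an illustration of the pattern, not the application's actual orchestration code.

# Simplified sketch of a multi-turn pipeline: each assistant receives the
# previous assistant's output, so context is carried from turn to turn.
def run_pipeline(query, steps):
    context = query
    for step in steps:
        # The real functions later in this guide return JSON responses, so in
        # practice you would extract the message text before the next turn.
        context = step(context)
    return context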


What Are AI Assistants?

AI assistants are digital agents powered by machine learning models that help users perform tasks, answer questions, and provide recommendations. Unlike co-pilots or AI agents, AI assistants focus on assisting with user-driven tasks, such as scheduling meetings, performing web searches, or, in our case, handling marketing tasks.

Assistants vs. Co-Pilots vs. Agents

There are three distinct categories of LLM-driven tools:

  • AI Assistants: Designed to respond to user commands and requests. Common examples include virtual assistants like Siri or Alexa, but they can also handle more specialized workflows.

  • Co-Pilots: These tools work alongside humans, helping improve tasks as they are being performed. Examples include Grammarly for writing and GitHub Copilot for coding.

  • AI Agents: Autonomous agents that plan and execute tasks with little or no user input, for example systems built around the ReAct pattern or other agentic workflows.

In our application, the AI assistants are key players in achieving each part of the task while ensuring user control and input at every step. Now, let’s break down how we’ve integrated multiple assistants to create a seamless marketing and campaign generation tool.


Step-by-Step Breakdown of the Application

Application Flow

1. Performing Market Research with an AI Assistant

In this first step, the AI assistant is responsible for gathering relevant information from the internet. We use a Llama 3.1 model fine-tuned for research tasks to collect numerical data, trends, and insights from across the web.

Here’s the core code for this assistant's function:

def call_market_research_assistant(query):
    payload = {
        "temperature": 0.8,
        "messages": [{"role": "user", "content": query}],
        "model": "kargwalaryan/research",
        "stream": False,
        "frequency_penalty": 0,
        "max_tokens": 100
    }
    response = requests.post(research_url, headers=headers, data=json.dumps(payload))
    return response.json()

This function sends a user query to the Tune Studio API, which uses a fine-tuned model to fetch relevant market research. The model acts as a subject matter expert on the specific topic or product the user inquires about.
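
Note that this snippet assumes some shared setup that isn't shown: the requests and json imports, the API endpoint, and an authenticated header. A minimal version might look like the sketch below; the endpoint and header names follow Tune Studio's chat-completions API as an assumption, so check your own Tune Studio dashboard for the exact values.

import json
import os

import requests

# Assumed shared setup for every assistant call in this guide. The endpoint
# and header names are based on Tune Studio's chat-completions API and may
# differ for your account; verify them in your Tune Studio dashboard.
research_url = "https://proxy.tune.app/chat/completions"
headers = {
    "Authorization": os.environ["TUNE_API_KEY"],  # your Tune Studio API key
    "Content-Type": "application/json",
}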

2. Analyzing Research and Creating Insights

Once the data is gathered, the next assistant steps in to analyze the research. This assistant runs on Claude 3.5 Sonnet, a model known for its compliance, safety, and conversational adaptability.

def call_analytics_assistant(research_text):
    user_content = f"Here is some market research data: {research_text}. Extract all the marketing insights and generate a campaign prompt."

    payload = {
        "temperature": 0.9,
        "messages": [
            {"role": "system", "content": "You are TuneStudio"},
            {"role": "user", "content": user_content}
        ],
        "model": "anthropic/claude-3.5-sonnet",
        "stream": False,
        "frequency_penalty": 0.2,
        "max_tokens": 300
    }
    response = requests.post(research_url, headers=headers, data=json.dumps(payload))
    return response.json()

Here, the Claude Sonnet model processes the research and extracts stylistic and strategic insights that will inform the next step—campaign generation.
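
Since each helper returns the raw JSON response, the glue between turns is pulling the assistant's message text out of that JSON before handing it to the next assistant. Assuming Tune Studio returns an OpenAI-style chat-completion payload (an assumption worth verifying against your own responses), a small helper such as the hypothetical extract_text below does the job:

def extract_text(response_json):
    # Pull the assistant's reply out of an OpenAI-style chat-completion
    # response; adjust the keys if your provider's payload is shaped differently.
    return response_json["choices"][0]["message"]["content"]

# Example hand-off between the first two assistants:
# research = call_market_research_assistant("eco-friendly water bottles")
# analysis = call_analytics_assistant(extract_text(research))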

3. Generating the Marketing Campaign

For campaign generation, we need an assistant that not only understands the market analysis but can also create a compelling, structured campaign. The Claude Sonnet model shines in this area, as it generates an engaging and compliant campaign strategy based on market trends.

def generate_campaign(analysis_text):
    payload = {
        "temperature": 0.9,
        "messages": [
            # Interpolate the analysis from the previous step into the prompt
            {"role": "system", "content": f"Generate a marketing campaign based on this analysis: {analysis_text}."}
        ],
        "model": "kargwalaryan/campaign-gen",
        "stream": False,
        "frequency_penalty": 0.2,
        "max_tokens": 150
    }
    response = requests.post(research_url, headers=headers, data=json.dumps(payload))
    return response.json()

This assistant pulls from the insights gathered and creates a comprehensive campaign that could be deployed over the next few months.

4. Image Generation for the Campaign Poster

The final assistant in this pipeline uses GPT-4o to generate a visual representation of the campaign, a poster, based on the textual campaign description produced in the earlier steps.

def call_image_generation(analysis_text):
    payload = {
        "temperature": 0.9,
        "messages": [
            # Interpolate the analysis from the previous step into the prompt
            {"role": "system", "content": f"Generate a campaign poster based on this analysis: {analysis_text}"}
        ],
        "model": "kargwalaryan/image-gen",
        "stream": False,
        "frequency_penalty": 0.2,
        "max_tokens": 100
    }
    response = requests.post(research_url, headers=headers, data=json.dumps(payload))
    return response.json()

This model generates a creative campaign poster based on the strategy developed in the earlier steps, completing the entire marketing pipeline.
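
With all four assistants defined, the Streamlit front end mentioned at the start only needs to call them in order and display the results. The sketch below is an illustrative layout rather than the repository's exact app, and it reuses the hypothetical extract_text helper from the analytics step:

import streamlit as st

st.title("Multi-Turn Marketing Assistant")
query = st.text_input("What product or market should we research?")

if st.button("Run pipeline") and query:
    with st.spinner("Researching the market..."):
        research = extract_text(call_market_research_assistant(query))
    with st.spinner("Analyzing the research..."):
        analysis = extract_text(call_analytics_assistant(research))
    with st.spinner("Generating the campaign..."):
        campaign = extract_text(generate_campaign(analysis))
    with st.spinner("Drafting the poster..."):
        poster = extract_text(call_image_generation(analysis))

    st.subheader("Market Research")
    st.write(research)
    st.subheader("Campaign")
    st.write(campaign)
    st.subheader("Poster")
    st.write(poster)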


Why Use Multi-Turn Assistant Workflows?

Multi-turn workflows allow for complex tasks to be broken into smaller, manageable operations, each handled by a specialized AI assistant. This ensures that the final output is not only accurate but also aligned with the user's overall goals.

Some of the key advantages of multi-turn workflows include:

  • Context Retention: The application retains context across different stages of the workflow. This allows each assistant to build upon the work of previous assistants.
  • Task Specialization: Each assistant is optimized for a specific sub-task, ensuring higher performance in individual areas like research, analysis, campaign generation, and image creation.
  • Flexibility and Customization: You can easily modify or swap out assistants to suit different applications. For example, you could replace the market research assistant with one better suited to another industry or domain, as sketched below.
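
One way to make that swapping concrete is to keep each assistant's settings in a small configuration table and build the payload from it, so changing a model or prompt never touches the pipeline itself. This is a hypothetical refactor of the functions above, not code from the repository:

# Hypothetical configuration table: swapping an assistant means editing an
# entry here rather than rewriting a function.
ASSISTANTS = {
    "research": {"model": "kargwalaryan/research", "max_tokens": 100},
    "analysis": {"model": "anthropic/claude-3.5-sonnet", "max_tokens": 300},
    "campaign": {"model": "kargwalaryan/campaign-gen", "max_tokens": 150},
}

def call_assistant(name, content):
    cfg = ASSISTANTS[name]
    payload = {
        "temperature": 0.9,
        "messages": [{"role": "user", "content": content}],
        "model": cfg["model"],
        "stream": False,
        "frequency_penalty": 0.2,
        "max_tokens": cfg["max_tokens"],
    }
    response = requests.post(research_url, headers=headers, data=json.dumps(payload))
    return response.json()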

Conclusion

Creating a multi-turn AI assistant application allows you to harness the power of multiple LLMs and assistants to handle complex tasks in a highly structured way. By breaking down tasks into distinct stages and integrating models like Llama 3.1, Claude Sonnet, and GPT-4o, you can build intelligent, autonomous pipelines that help users with everything from market research to visual content creation.

This approach is ideal for applications where tasks need to be completed in a step-by-step manner while maintaining context across all steps.

Let me know if you have any questions or suggestions for further improvement! Stay tuned for more advanced tutorials on LLMs and VLMs.
