Diego Orofino

Posted on • Originally published at Medium

Implementing a Multi-Functional Agentic RAG System with .NET and Azure AI - Part 1

The field of artificial intelligence is rapidly evolving, with agent-based systems playing a crucial role in making AI interactions more dynamic and intelligent. Retrieval-Augmented Generation (RAG) systems enhance AI capabilities by integrating real-time data retrieval with generative AI models, providing more accurate and relevant responses.

This article focuses on building an AI agent using .NET 9, Azure AI, and Microsoft Semantic Kernel to process natural language queries dynamically. This agent will:

  • Capture and analyze user queries.
  • Extract intent and relevant entities.
  • Route queries to the appropriate API or knowledge retrieval system.
  • Dynamically generate responses.

By the end of Part 1, we will have a fully functional console-based AI agent capable of processing natural language queries and classifying intent.

1. Setting Up the Development Environment

Before proceeding, ensure you have an Azure account to access Azure AI services. Follow the Azure OpenAI Assistants Quickstart Guide to set up the required resources.

To build this AI agent, you will need:

  • .NET 9 SDK: install from the official .NET download page.
  • Microsoft Semantic Kernel: Install via NuGet.
  • Visual Studio Code (or any preferred editor).

Installing Required Packages

First, create a new .NET 9 console application:

```sh
mkdir AI_Agent
cd AI_Agent
dotnet new console
```

Next, install the required dependencies:

```sh
# Install Microsoft Semantic Kernel
dotnet add package Microsoft.SemanticKernel

# Install the Semantic Kernel OpenAI connector
dotnet add package Microsoft.SemanticKernel.Connectors.OpenAI
```

2. Implementing Query Interception and Parsing

The AI agent must capture and analyze user queries before making decisions. We will use Microsoft Semantic Kernel to extract key elements from user input.

Query Processing Implementation

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

class Program
{
    static async Task Main()
    {
        Console.WriteLine("AI Agent Initialized. Enter your query:");
        string userQuery = Console.ReadLine() ?? string.Empty;
        await ProcessQuery(userQuery);
    }

    static async Task ProcessQuery(string query)
    {
        // Connect to Azure OpenAI (replace the placeholders with your own values).
        var client = new Azure.AI.OpenAI.AzureOpenAIClient(
            new Uri("YOUR_OPENAI_ENDPOINT"),
            new Azure.AzureKeyCredential("YOUR_OPENAI_KEY"));

        var kernel = Kernel.CreateBuilder()
            .AddAzureOpenAIChatCompletion("YOUR_OPENAI_MODEL", client)
            .Build();

        // Prompt template; {{$input}} is filled in from the kernel arguments.
        const string summarizePrompt = @"
        Please summarize the following text:
        {{$input}}
        ";

        var summarizationFunction = kernel.CreateFunctionFromPrompt(summarizePrompt);

        var result = await kernel.InvokeAsync(summarizationFunction,
            new KernelArguments { { "input", query } });

        Console.WriteLine("Extracted Entities:");
        Console.WriteLine(result);
    }
}
```

Explanation:

  • The code initializes the AzureOpenAIClient with your endpoint and key. Remember to replace the placeholders with your actual values (or load them from the environment, as sketched after this list).
  • It then creates a Kernel and adds the Azure OpenAI chat completion connector, specifying your model deployment name. Again, replace the placeholder.
  • A prompt is defined to instruct the model to summarize the input text.
  • kernel.CreateFunctionFromPrompt creates a reusable function from this prompt.
  • kernel.InvokeAsync executes the function, passing the user query as input.
  • The result (the summarization) is then printed to the console. Console.WriteLine(result) works because FunctionResult renders the model's output as its string representation; to access the string explicitly, use result.GetValue&lt;string&gt;().
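
Hardcoding credentials is fine for a quick demo, but even in a console experiment it is safer to read them from the environment. Here is a minimal sketch, assuming you export the (hypothetical, rename as you like) variables AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_KEY, and AZURE_OPENAI_DEPLOYMENT before running:

```csharp
// Read Azure OpenAI settings from environment variables instead of hardcoding them.
// The variable names below are an assumption; use whatever names suit your setup.
static (string Endpoint, string Key, string Deployment) LoadAzureOpenAISettings()
{
    string? endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
    string? key = Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY");
    string? deployment = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT");

    if (string.IsNullOrWhiteSpace(endpoint) ||
        string.IsNullOrWhiteSpace(key) ||
        string.IsNullOrWhiteSpace(deployment))
    {
        throw new InvalidOperationException(
            "Set AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_KEY, and AZURE_OPENAI_DEPLOYMENT first.");
    }

    return (endpoint, key, deployment);
}
```

ProcessQuery can then call this helper instead of embedding literals, which also keeps keys out of source control.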

3. Enhancing Intent Recognition

The AI must determine the user's intent, such as requesting weather updates or stock prices. We use rule-based classification for this purpose.

Intent Recognition Implementation

```csharp
enum IntentType
{
    Weather,
    StockMarket,
    Unknown
}

static IntentType RecognizeIntent(string query)
{
    // Case-insensitive matching handles "Weather" and "weather" alike.
    if (query.Contains("weather", StringComparison.OrdinalIgnoreCase)) return IntentType.Weather;
    if (query.Contains("stock", StringComparison.OrdinalIgnoreCase)) return IntentType.StockMarket;
    return IntentType.Unknown;
}
```

Explanation:

  • An enum defines the possible intent types.
  • The RecognizeIntent function uses simple string matching to determine the intent. I've added StringComparison.OrdinalIgnoreCase to make the comparisons case-insensitive, which is generally better for user input. A sketch of a more scalable variant follows this list.
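
Two hardcoded if checks won't scale as intents grow. One way to extend the rule-based approach is a keyword table; this is a minimal sketch, and the keyword lists are illustrative, not exhaustive (add `using System.Collections.Generic;` if your snippet doesn't already have it):

```csharp
// Map each intent to the keywords that signal it; the first match wins.
static readonly Dictionary<IntentType, string[]> IntentKeywords = new()
{
    [IntentType.Weather] = new[] { "weather", "temperature", "forecast" },
    [IntentType.StockMarket] = new[] { "stock", "share price", "ticker" },
};

static IntentType RecognizeIntentByKeywords(string query)
{
    foreach (var (intent, keywords) in IntentKeywords)
    {
        foreach (var keyword in keywords)
        {
            if (query.Contains(keyword, StringComparison.OrdinalIgnoreCase))
                return intent;
        }
    }
    return IntentType.Unknown;
}
```

Adding a new intent then means adding one enum member and one dictionary entry, rather than another branch.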

4. Developing Decision-Making Processes

Once intent is identified, the AI must decide how to respond.

Decision-Making Logic Implementation

```csharp
// async so that Part 2 can await real API calls here; until then the
// compiler emits warning CS1998 because the method contains no await.
static async Task ExecuteDecision(IntentType intent)
{
    switch (intent)
    {
        case IntentType.Weather:
            Console.WriteLine("Fetching weather data...");
            break;
        case IntentType.StockMarket:
            Console.WriteLine("Fetching stock price...");
            break;
        default:
            Console.WriteLine("I’m not sure how to handle this request.");
            break;
    }
}
```

Explanation:

  • The ExecuteDecision function uses a switch statement to determine the appropriate action based on the identified intent. Currently, it just prints placeholder messages; a sketch of a more extensible dispatch approach follows.
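
The switch works, but every new intent means editing it. An alternative sketch, assuming you later want to register handlers dynamically (the handler bodies here are the same placeholders, and this needs `using System.Collections.Generic;`):

```csharp
// Map intents to handlers; registering a new intent is one dictionary entry.
static readonly Dictionary<IntentType, Func<Task>> IntentHandlers = new()
{
    [IntentType.Weather] = () =>
    {
        Console.WriteLine("Fetching weather data...");
        return Task.CompletedTask;
    },
    [IntentType.StockMarket] = () =>
    {
        Console.WriteLine("Fetching stock price...");
        return Task.CompletedTask;
    },
};

static Task ExecuteDecisionViaHandlers(IntentType intent)
{
    if (IntentHandlers.TryGetValue(intent, out var handler))
        return handler();

    Console.WriteLine("I’m not sure how to handle this request.");
    return Task.CompletedTask;
}
```

In Part 2, each placeholder lambda can be swapped for a real async API call without touching the dispatch logic.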

5. Complete Code: Program.cs

Below is the final assembled code, integrating query processing, intent recognition, and decision execution into a fully functional AI agent.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

class Program
{
    static async Task Main()
    {
        Console.WriteLine("AI Agent Initialized. Enter your query:");
        string userQuery = Console.ReadLine() ?? string.Empty;
        await ProcessQuery(userQuery);
    }

    static async Task ProcessQuery(string query)
    {
        // Connect to Azure OpenAI (replace the placeholders with your own values).
        var client = new Azure.AI.OpenAI.AzureOpenAIClient(
            new Uri("YOUR_OPENAI_ENDPOINT"),
            new Azure.AzureKeyCredential("YOUR_OPENAI_KEY"));

        var kernel = Kernel.CreateBuilder()
            .AddAzureOpenAIChatCompletion("YOUR_OPENAI_MODEL", client)
            .Build();

        // Prompt template; {{$input}} is filled in from the kernel arguments.
        const string summarizePrompt = @"
        Please summarize the following text:
        {{$input}}
        ";

        var summarizationFunction = kernel.CreateFunctionFromPrompt(summarizePrompt);

        var result = await kernel.InvokeAsync(summarizationFunction,
            new KernelArguments { { "input", query } });

        Console.WriteLine("Extracted Entities:");
        Console.WriteLine(result);

        IntentType intent = RecognizeIntent(query);
        await ExecuteDecision(intent);
    }

    static IntentType RecognizeIntent(string query)
    {
        // Case-insensitive matching handles "Weather" and "weather" alike.
        if (query.Contains("weather", StringComparison.OrdinalIgnoreCase)) return IntentType.Weather;
        if (query.Contains("stock", StringComparison.OrdinalIgnoreCase)) return IntentType.StockMarket;
        return IntentType.Unknown;
    }

    // async so that Part 2 can await real API calls here.
    static async Task ExecuteDecision(IntentType intent)
    {
        switch (intent)
        {
            case IntentType.Weather:
                Console.WriteLine("Fetching weather data...");
                break;
            case IntentType.StockMarket:
                Console.WriteLine("Fetching stock price...");
                break;
            default:
                Console.WriteLine("I’m not sure how to handle this request.");
                break;
        }
    }
}

enum IntentType
{
    Weather,
    StockMarket,
    Unknown
}
```

6. Running the Program

To run the program, navigate to the project directory in your terminal and execute the following command:

```sh
dotnet run
```

This will compile and run the .NET application. The console will display the message “AI Agent Initialized. Enter your query:”. You can then enter your query.

Example 1: Weather Query

```
dotnet run
AI Agent Initialized. Enter your query:
how is the weather in new york?
Extracted Entities:
The text is asking about the current weather in New York.
Fetching weather data...
```

Example 2: Stock Query

```
dotnet run
AI Agent Initialized. Enter your query:
Whats the price of the stock for apple?
Extracted Entities:
The text is asking about the current stock price of Apple.
Fetching stock price...
```

Example 3: Unknown Query

```
dotnet run
AI Agent Initialized. Enter your query:
Tell me a joke.
Extracted Entities:
The text is requesting a joke.
I’m not sure how to handle this request.
```

Explanation of the Results:

  • The “Extracted Entities” section shows the summarization produced by the Azure OpenAI model (despite the label, the prompt asks for a summary rather than formal entity extraction). This helps confirm that the model is correctly interpreting the user’s query. The exact wording of the summarization might vary slightly depending on the model and the specific query.
  • The subsequent output (“Fetching weather data…”, “Fetching stock price…”, or “I’m not sure how to handle this request.”) reflects the decision made by the ExecuteDecision function based on the recognized intent.

7. Summary of Enhancements

In Part 1, I implemented:

✅ Query processing using Microsoft Semantic Kernel.
✅ Intent recognition for weather and stock-related queries.
✅ Decision-making logic to determine AI responses.
✅ A fully functional console-based AI agent.

Important: This Part 1 implementation only provides placeholder messages (e.g., “Fetching weather data…”). In Part 2, these placeholders will be replaced with actual API calls to retrieve real-time data. Also, remember to replace the placeholder API key, endpoint, and model name in the code with your actual Azure OpenAI credentials (or load them from environment variables, as sketched in Section 2).

In Part 2, I will integrate real-time data retrieval using external APIs like OpenWeather and FinancialModelingPrep. 🚀

➡️ Next: Core Functionalities — Part 2 (of 3) (SOON)
