ByteHide

Posted on • Originally published at bytehide.com

LLMs in .NET Made Easy: Hands-On with Microsoft.Extensions.AI

I don’t know about you, but I’m obsessed with the potential of Large Language Models (LLMs). Seriously, the things you can do with them these days just blow my mind. A few weeks ago, I was wrestling with this .NET project – a pretty standard customer support chatbot – and it hit me: “Why not supercharge this thing with an LLM?”

The idea of having the chatbot understand natural language, generate human-like responses, and even anticipate user needs was incredibly exciting. But then reality set in. Where do I even begin? How do I connect a powerful LLM to my .NET application? It felt like I was staring at a mountain of complexity.

That’s when I stumbled upon Microsoft.Extensions.AI. This awesome framework makes integrating LLMs into .NET projects surprisingly straightforward. It provides this elegant, unified approach that just works, no matter which LLM provider you prefer (OpenAI, Azure OpenAI, or even open-source options like Ollama).


In this article, we’re going to dive headfirst into the world of LLMs in .NET. We’ll explore the core concepts of Microsoft.Extensions.AI, get our hands dirty with some real-world code examples, and unlock the power of LLMs to build truly intelligent applications.

Why LLMs Matter in .NET Development

Okay, so we’ve established that LLMs are pretty cool, but why should you, a .NET developer, care? Well, imagine this:

You’re building a complex e-commerce application. Instead of manually coding every product description, an LLM could automatically generate compelling and informative descriptions based on product data.
You’re developing a customer support system. An LLM can power a chatbot that understands natural language, provides instant answers to common questions, and escalates complex issues to human agents seamlessly.
You need to analyze mountains of text data. LLMs can help you extract key insights, identify trends, and even summarize lengthy documents in a fraction of the time it would take manually.

See what I mean? LLMs are like having an incredibly intelligent assistant that can understand and generate human-like text, opening up a world of possibilities for your .NET applications. They can help you:

Boost productivity: Automate tedious tasks, like writing boilerplate code or creating documentation.
Enhance user experiences: Build more intuitive and engaging applications that understand natural language and respond intelligently.
Solve complex problems: Analyze data, generate creative content, and make more informed decisions.

In the past, integrating LLMs into .NET projects was a real headache. But with the emergence of frameworks like Microsoft.Extensions.AI, it’s becoming easier than ever to harness the power of LLMs. Let’s take a closer look at how it works.

Getting Started with Microsoft.Extensions.AI

Microsoft.Extensions.AI is like the “magic bridge” that connects your .NET applications to the incredible world of LLMs. It’s designed to make the integration process as smooth and painless as possible.

But why do we need this framework in the first place? Well, let’s think about it:
Different providers, different APIs: LLMs are offered by various providers (OpenAI, Azure OpenAI, Cohere, etc.), each with its own API and quirks. Microsoft.Extensions.AI provides a unified interface, so you don’t have to wrestle with different APIs directly. It’s like having a universal remote for all your LLMs!
Abstraction is key: Microsoft.Extensions.AI abstracts away the complexities of LLM integration. You don’t need to worry about the low-level details of API calls, authentication, or response handling. It lets you focus on what matters most: building awesome applications!
Flexibility for the win: Want to switch from one LLM provider to another? No problem! Microsoft.Extensions.AI makes it easy to swap providers without rewriting your entire application. This gives you the freedom to experiment and choose the best LLM for your specific needs.
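As a sketch of that flexibility: the library's core abstraction is the IChatClient interface, and everything downstream of it is provider-agnostic. (Type and method names here reflect the preview packages and may differ slightly between versions; the commented-out hosted-provider line is illustrative.)

```csharp
using Microsoft.Extensions.AI;

// Today: a local model via Ollama
// (OllamaChatClient ships in the Microsoft.Extensions.AI.Ollama package).
IChatClient client = new OllamaChatClient(new Uri("http://localhost:11434"), "llama2");

// Tomorrow: swap in a hosted provider; nothing downstream changes,
// because the rest of the code is written against IChatClient.
// IChatClient client = new OpenAIClient(apiKey).AsChatClient("gpt-4o-mini");

var response = await client.GetResponseAsync("Say hello in one sentence.");
Console.WriteLine(response.Text);
```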

Think of Microsoft.Extensions.AI as a powerful toolbox that simplifies LLM integration in .NET. It provides you with the tools and abstractions you need to build intelligent applications quickly and efficiently.

Ready to see it in action? Let’s move on to the hands-on part and build our first LLM-powered application!

Hands-On: Building a Simple LLM Application

Alright, enough theory! Let’s get our hands dirty and build a simple LLM-powered application. How about a quick text summarizer?

Imagine you have a long article or document and you need to get the gist of it quickly. Our app will take the text as input and use an LLM to generate a concise summary.

Before we begin, make sure you have the following:


Docker (optional): Ollama can run inside a Docker container, though it also installs natively on most platforms. If you want the Docker route and don’t have it installed, you can download it from the Docker website.
.NET SDK: You’ll need the .NET SDK installed on your machine. You can download it from the official Microsoft website.
Ollama: Ollama is a tool that allows you to run LLMs locally. Follow these steps to get it up and running:
• Install Ollama: Follow the instructions on the Ollama website to install it on your system.
• Download a Model: Download a suitable LLM model from the Ollama library. For this example, we’ll use llama2. Pull it with the following command in your terminal:

ollama pull llama2

• Start Ollama: Start the Ollama service with the model you downloaded:

ollama run llama2

1. Set up your project:

• Create a new .NET console application.
• Install the Microsoft.Extensions.AI NuGet package.
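Both steps can be done from the terminal. (The Ollama provider has shipped as a separate preview package; check NuGet for the current package names and versions.)

```shell
# Create a new console project
dotnet new console -n TextSummarizer
cd TextSummarizer

# Add the Microsoft.Extensions.AI abstractions and the Ollama provider
dotnet add package Microsoft.Extensions.AI
dotnet add package Microsoft.Extensions.AI.Ollama
```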

2. Configure your AI service:

In your Program.cs file, create an IChatClient for your preferred LLM provider. I’ll use Ollama in this example, as it allows us to run an LLM locally. (Microsoft.Extensions.AI is evolving quickly, so exact type and method names may differ slightly between versions.)

using Microsoft.Extensions.AI;

// Connect to the locally running Ollama instance and the llama2 model.
// OllamaChatClient ships in the Microsoft.Extensions.AI.Ollama package.
IChatClient client = new OllamaChatClient(
    new Uri("http://localhost:11434"), // Make sure Ollama is running
    "llama2");

// ... (we'll add more code here soon)

It’s worth noting that when integrating your LLM, you’ll typically need to provide an endpoint URL and an API key or token for authentication. Storing these credentials securely is paramount to prevent unauthorized access and potential misuse.

This is where a solution like ByteHide Secrets comes in. It offers a robust and secure way to manage your sensitive API keys and other credentials, ensuring your LLM integrations are protected.

3. Use the IChatClient:

Now, let’s use the IChatClient to send our text to the LLM:

// ... (previous code)

Console.WriteLine("Enter the text you want to summarize:");
string? text = Console.ReadLine();

var response = await client.GetResponseAsync(new[]
{
    new ChatMessage(ChatRole.System, "You are a helpful AI assistant that summarizes text."),
    new ChatMessage(ChatRole.User, $"Summarize this text for me:\n\n{text}")
});

Console.WriteLine("\nSummary:");
Console.WriteLine(response.Text);


4. Run your application:

Make sure you have Ollama running with a suitable LLM model. Then, run your .NET application. It will prompt you to enter text, send it to the LLM, and display the generated summary.

There you have it! A basic text summarizer built with just a few lines of code. This is just a taste of what you can achieve with LLMs in .NET.

In the next section, we’ll explore some more advanced LLM capabilities and how to use them in your applications.

Exploring Advanced LLM Capabilities

Okay, we’ve built a simple text summarizer. But LLMs can do so much more than that! Let’s explore some advanced capabilities and how to leverage them in your .NET applications.

Prompt Engineering

Think of a prompt as an instruction or a question you give to the LLM. It’s like giving directions to a really smart but slightly unpredictable assistant. By crafting effective prompts, you can significantly improve the quality and relevance of the LLM’s output.

For example, instead of simply asking the LLM to “summarize this text,” you could provide more specific instructions:
• “Summarize this article in three bullet points, focusing on the key findings.”
• “Summarize this technical document for a non-technical audience.”
• “Summarize this news story in the style of a Shakespearean sonnet.” (Okay, maybe not that last one, but you get the idea!)
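In code, a more specific instruction is just a richer message. A minimal sketch, reusing an IChatClient as in the summarizer example (the prompt wording and variable names are illustrative):

```csharp
// A focused system role plus a constrained user prompt usually
// beats a bare "summarize this text".
var response = await client.GetResponseAsync(new[]
{
    new ChatMessage(ChatRole.System, "You are a precise technical summarizer."),
    new ChatMessage(ChatRole.User,
        $"Summarize this article in three bullet points, focusing on the key findings:\n\n{text}")
});

Console.WriteLine(response.Text);
```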

Fine-tuning

Fine-tuning is like taking a pre-trained LLM and teaching it to be an expert in a specific domain. This involves training the LLM on a smaller, specialized dataset.

For example, you could fine-tune an LLM on medical texts to make it better at understanding and generating medical reports. Or you could fine-tune it on legal documents to help with legal research and analysis.

Code Generation

LLMs are becoming increasingly good at generating code. Imagine describing the functionality you want in plain English, and the LLM writes the code for you! This can be a huge time-saver for .NET developers.

For example, you could ask the LLM to:

• “Generate a C# function that calculates the factorial of a number.”
• “Create a .NET class for managing customer data.”
• “Write a unit test for this method.”
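With the same IChatClient abstraction, asking for code is just another prompt. A sketch (output quality depends heavily on the model, and generated code should always be reviewed before use):

```csharp
var response = await client.GetResponseAsync(
    "Generate a C# function that calculates the factorial of a number. " +
    "Return only the code, no explanation.");

// The reply is plain text containing the generated source code.
Console.WriteLine(response.Text);
```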


Other Advanced Capabilities

LLMs can also be used for tasks like:
Sentiment analysis: Determining the emotional tone of a piece of text.
Question answering: Providing accurate answers to questions based on a given context.
Translation: Translating text between different languages.
Content creation: Generating creative content like poems, scripts, or even musical pieces.

By exploring these advanced capabilities and combining them with the power of .NET, you can build truly innovative and intelligent applications.

In the next section, we’ll look at some practical considerations for working with LLMs in real-world scenarios.

Beyond the Basics: Customizing Your LLM Integration

Microsoft.Extensions.AI provides a great starting point for working with LLMs in .NET. But what if you need more control or want to explore options beyond the built-in features?

Here’s where things get really interesting.

Working with Different LLM Providers:

Let’s say you want to use the Cohere API directly instead of relying on Microsoft.Extensions.AI’s abstraction. Here’s a basic example of how you might send a request to Cohere’s generate endpoint:

using System.Net.Http.Headers;
using System.Text.Json;

// ... other code ...

var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "YOUR_COHERE_API_KEY");

var requestBody = new
{
    model = "command-xlarge-nightly",
    prompt = "Write a short story about a robot who learns to love.",
    max_tokens = 300,
    temperature = 0.7
};

var content = new StringContent(JsonSerializer.Serialize(requestBody),
    System.Text.Encoding.UTF8, "application/json");

// Check Cohere's documentation for the current endpoint version.
var response = await client.PostAsync("https://api.cohere.ai/v1/generate", content);
response.EnsureSuccessStatusCode();

var responseBody = await response.Content.ReadAsStringAsync();

// Parse the response and extract the generated text.
var result = JsonSerializer.Deserialize<CohereResponse>(responseBody);
Console.WriteLine(result?.generations[0].text);

// Minimal response shape for the generate endpoint (only the fields we read).
// Type declarations must come after top-level statements.
public class CohereGeneration { public string text { get; set; } }
public class CohereResponse { public CohereGeneration[] generations { get; set; } }

Integrating Custom LLMs:

For this example, let’s imagine you have a custom LLM running as a gRPC service. Here’s a snippet demonstrating how you might interact with it:

using Grpc.Net.Client;

// ... other code, plus the gRPC client generated from your service's .proto ...

using var channel = GrpcChannel.ForAddress("https://localhost:5001"); // your service's address
var client = new YourCustomLLM.YourCustomLLMClient(channel);

var request = new GenerateRequest { Prompt = "Translate this sentence to Spanish: Hello, world!" };
var reply = await client.GenerateAsync(request);

Console.WriteLine(reply.GeneratedText);


Extending Microsoft.Extensions.AI:

While Microsoft.Extensions.AI provides a solid foundation, you might need to extend its capabilities to fit specific needs. This might involve integrating with a new type of LLM, adding custom processing logic, or even contributing back to the open-source project.

Creating a Custom Provider:

Let’s imagine a scenario where you want to integrate with a fictional LLM provider called “Lexica” which has a unique API and authentication method. You could write a custom IChatClient implementation to handle the integration (a sketch; Lexica’s API shape is invented for illustration, and the IChatClient surface may differ slightly between versions):

using System.Linq;
using System.Text.Json;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Configuration;

public class LexicaChatClient : IChatClient
{
    private readonly HttpClient _client;
    private readonly string _apiKey;

    public LexicaChatClient(HttpClient client, IConfiguration configuration)
    {
        _client = client;
        _apiKey = configuration["Lexica:ApiKey"];
    }

    public async Task<ChatResponse> GetResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        var lexicaRequest = new
        {
            api_key = _apiKey,
            model = "lexica-text-v1",
            prompt = string.Join("\n", messages.Select(m => m.Text))
        };

        var content = new StringContent(JsonSerializer.Serialize(lexicaRequest),
            System.Text.Encoding.UTF8, "application/json");
        var response = await _client.PostAsync("https://api.lexica.ai/generate", content, cancellationToken);
        response.EnsureSuccessStatusCode();

        var responseBody = await response.Content.ReadAsStringAsync(cancellationToken);

        // Lexica is fictional, so we simply wrap the raw body; a real provider
        // would deserialize its response format here.
        return new ChatResponse(new ChatMessage(ChatRole.Assistant, responseBody));
    }

    // Streaming and service resolution are stubbed out to keep the example short.
    public IAsyncEnumerable<ChatResponseUpdate> GetStreamingResponseAsync(
        IEnumerable<ChatMessage> messages,
        ChatOptions? options = null,
        CancellationToken cancellationToken = default)
        => throw new NotImplementedException();

    public object? GetService(Type serviceType, object? serviceKey = null) => null;

    public void Dispose() { }
}

In this example:
• We define a LexicaChatClient class that implements the IChatClient interface.
• We inject an HttpClient to make API calls and IConfiguration to access the API key.
• The GetResponseAsync method constructs a request specific to the Lexica API, sends it, and wraps the result in a ChatResponse object.

Real-World Considerations: Security, Performance, and Ethics

Alright, we’ve explored the technical aspects of integrating LLMs into your .NET applications. But before you unleash your AI-powered creations upon the world, there are some crucial real-world considerations to keep in mind.

Security

Protect your API keys: Treat your LLM API keys like gold! Never hardcode them directly in your application. Use secure storage mechanisms like environment variables, Azure Key Vault, or, better yet, ByteHide Secrets to keep them safe from prying eyes.
Input validation is key: Always validate user input before feeding it to your LLM. Malicious users could try to inject harmful prompts or manipulate your application’s behavior.
Data privacy matters: Be mindful of the data you send to and receive from the LLM. If you’re dealing with sensitive information, consider using techniques like differential privacy or federated learning to protect user privacy.
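As a minimal sketch of the first point, read the key from the environment instead of embedding it in source (the variable name is illustrative):

```csharp
// Fail fast if the key is missing, rather than shipping a hardcoded fallback.
var apiKey = Environment.GetEnvironmentVariable("COHERE_API_KEY")
    ?? throw new InvalidOperationException(
        "COHERE_API_KEY is not set. Configure it in your environment or secret store.");
```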

Performance

Optimize your prompts: Well-crafted prompts can significantly improve LLM performance. Avoid ambiguity and provide clear instructions to get the desired output efficiently.
Choose the right model: Different LLMs have different strengths and weaknesses. Select a model that’s optimized for your specific task and resource constraints.
Caching can be your friend: Cache frequently used responses to reduce latency and API costs.
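A naive caching sketch, reusing the IChatClient shape from earlier (a production cache would bound its size and expire entries):

```csharp
using System.Collections.Concurrent;
using Microsoft.Extensions.AI;

// In-memory cache keyed by the input text.
var cache = new ConcurrentDictionary<string, string>();

async Task<string> SummarizeCachedAsync(IChatClient client, string text)
{
    if (cache.TryGetValue(text, out var cached))
        return cached; // Skip the API call (and its cost/latency) on repeats.

    var response = await client.GetResponseAsync($"Summarize this text:\n\n{text}");
    return cache[text] = response.Text;
}
```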

Ethics

Bias awareness: LLMs are trained on massive datasets, which can contain biases and reflect societal prejudices. Be aware of these potential biases and take steps to mitigate their impact on your application.
Transparency is crucial: Be transparent with users about how your application uses LLMs. Explain how decisions are made and what data is being used.
Responsible use: Use LLMs responsibly and ethically. Avoid using them for harmful purposes or to spread misinformation.

By addressing these security, performance, and ethical considerations, you can build LLM-powered applications that are not only powerful but also safe, efficient, and responsible.

In the next section, we’ll wrap things up and look ahead to the exciting future of LLMs in .NET.

Looking Ahead: The Future of LLMs in .NET

The world of LLMs is constantly evolving, with new models, techniques, and applications emerging all the time. As a .NET developer, it’s an exciting time to be at the forefront of this AI revolution.

Here are some trends to watch:

More powerful LLMs: Expect to see even more powerful and capable LLMs in the future, with improved understanding, generation, and reasoning abilities.
Multimodal LLMs: LLMs that can process and generate not just text but also images, audio, and video are on the horizon. This opens up new possibilities for applications in areas like content creation, virtual assistants, and human-computer interaction.
Increased accessibility: LLMs are becoming more accessible, with easier-to-use tools and frameworks like Microsoft.Extensions.AI making integration smoother than ever.
Wider adoption in .NET: Expect to see LLMs being used in a wider range of .NET applications, from enterprise software to mobile apps and games.

How .NET Developers Can Stay Ahead:

Stay informed: Keep up with the latest advancements in LLM technology and research. Follow blogs, attend conferences, and engage with the community.
Experiment and learn: Get hands-on experience with different LLMs and tools. Build your own projects and explore the possibilities.
Embrace the AI revolution: Don’t be afraid to embrace the power of LLMs. They can be incredibly valuable tools for enhancing your .NET applications and solving real-world problems.

The future of LLMs in .NET is bright! With the right tools and mindset, you can be a part of this exciting journey and build the next generation of intelligent applications.

Now, let’s bring it all together in the final section.

Go Build Awesome AI-Powered Applications!

So, there you have it! We’ve journeyed through the world of LLMs in .NET, from the basics of Microsoft.Extensions.AI to advanced techniques and real-world considerations.

Remember the key takeaways:

LLMs are powerful tools: They can enhance your .NET applications with capabilities like natural language understanding, code generation, and content creation.
Microsoft.Extensions.AI simplifies integration: This framework makes it surprisingly easy to connect your .NET projects to various LLM providers.
Customization is key: Don’t be afraid to go beyond the basics and tailor your LLM integration to your specific needs.
Security, performance, and ethics matter: Build responsible AI applications that are secure, efficient, and ethical.

Now it’s your turn! Go forth and experiment, explore, and build amazing things with the power of LLMs. The possibilities are truly endless.

And if you’re looking for tools and resources to help you on your AI journey, don’t forget to check out ByteHide’s suite of AI-powered solutions. From securing your intellectual property and code with Shield to monitoring your application security in real-time with Monitor, ByteHide provides everything you need to build cutting-edge, secure, and efficient applications.
