In this post, we will integrate DeepSeek R1 into a .NET 9 console application using Semantic Kernel. If you’re looking to get started with DeepSeek models locally, this hands-on guide is for you.
What You Will Learn
- How to get started with DeepSeek R1
- How to use Ollama to run local models
- How to install and run the DeepSeek R1 model
- How to use Semantic Kernel in C#
1. Prerequisites
- Visual Studio 2022+ with the .NET 9 SDK installed (.NET 9 is still in preview, so make sure you have the preview SDK)
- Ollama (for managing and running local models)
- The DeepSeek R1 1.5b model (deepseek-r1:1.5b)
2. Installing Ollama
Ollama is a tool for running large language models (LLMs) locally. It simplifies downloading, deploying, and running open-source models such as LLaMA, Phi, and DeepSeek R1.
To install Ollama, visit the official download page at https://ollama.com/download and install it on your machine.
3. Installing DeepSeek R1
DeepSeek R1 is DeepSeek's first generation of reasoning models, with performance comparable to OpenAI o1. The family includes six dense models distilled from DeepSeek-R1, based on Llama and Qwen.
On the Ollama website, click Models, select deepseek-r1, and choose the 1.5b parameter option.
Open Command Prompt and run the command below:
ollama run deepseek-r1:1.5b
It will download the model and then start an interactive chat session automatically; type /bye to exit.
Once done, verify that the model is available:
ollama list
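You should see deepseek-r1:1.5b in the list, along these lines (the ID, exact size, and timestamp will vary on your machine):

NAME                ID            SIZE      MODIFIED
deepseek-r1:1.5b    &lt;model id&gt;    ~1.1 GB   a few minutes ago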
That’s it! We’re ready to integrate DeepSeek locally.
4. Creating .NET Console Application
- Launch Visual Studio and make sure .NET 9 is installed.
- Create a new project: File → New → Project… and pick Console App targeting .NET 9.
- Name your project, e.g., DeepSeekDemoApp or any name you prefer.
- Check the target framework: right-click your project → Properties and confirm that Target Framework is set to .NET 9.
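Your project file should end up looking roughly like this (ImplicitUsings and Nullable are the defaults generated by the console template):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net9.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
</Project>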
5. Integrating DeepSeek R1 with Semantic Kernel
While you could call DeepSeek via direct HTTP requests to Ollama (see the sketch below), using Semantic Kernel offers a powerful abstraction for prompt engineering, orchestration, and more.
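For reference, here is a minimal sketch of the direct approach, calling Ollama’s documented /api/generate endpoint with a plain HttpClient (the prompt is just an example):

using System.Net.Http.Json;
using System.Text.Json;

using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

// Ollama's /api/generate endpoint; "stream = false" returns a single JSON object.
var payload = new { model = "deepseek-r1:1.5b", prompt = "Why is the sky blue?", stream = false };
var reply = await http.PostAsJsonAsync("/api/generate", payload);
reply.EnsureSuccessStatusCode();

// The generated text is in the "response" property of the returned JSON.
using var doc = JsonDocument.Parse(await reply.Content.ReadAsStringAsync());
Console.WriteLine(doc.RootElement.GetProperty("response").GetString());

The rest of this post uses Semantic Kernel instead, which hides these details behind a connector.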
- Add the necessary NuGet packages to your .csproj:
<ItemGroup>
  <PackageReference Include="Codeblaze.SemanticKernel.Connectors.Ollama" Version="1.3.1" />
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.35.0" />
</ItemGroup>
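If you prefer the command line, the same packages can be added with the dotnet CLI:

dotnet add package Codeblaze.SemanticKernel.Connectors.Ollama --version 1.3.1
dotnet add package Microsoft.SemanticKernel --version 1.35.0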
6. Complete code
Semantic Kernel can use a custom connector to talk to local endpoints. For simplicity, we’ll outline a sample approach:
Program.cs:
using Codeblaze.SemanticKernel.Connectors.Ollama;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;

// Point Semantic Kernel at the local Ollama endpoint and the DeepSeek R1 model.
var builder = Kernel.CreateBuilder()
    .AddOllamaChatCompletion("deepseek-r1:1.5b", "http://localhost:11434");

// The Ollama connector resolves an HttpClient from the service collection.
builder.Services.AddScoped<HttpClient>();

var kernel = builder.Build();

// Simple chat loop: read a question, send it to DeepSeek, print the reply.
while (true)
{
    Console.WriteLine("Ask anything to Deepseek");
    string? input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input)) continue;

    var response = await kernel.InvokePromptAsync(input);
    Console.WriteLine($"\nDeepseek: {response}\n");
}
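One thing to note: DeepSeek R1 is a reasoning model, and its replies typically include the model’s chain-of-thought wrapped in <think>...</think> tags before the final answer. If you only want to display the answer, you can strip that block with a small helper (a sketch; StripReasoning is our own name, not a Semantic Kernel API):

using System.Text.RegularExpressions;

// Removes the <think>...</think> reasoning block that DeepSeek R1 emits before its answer.
static string StripReasoning(string text) =>
    Regex.Replace(text, "<think>.*?</think>", string.Empty, RegexOptions.Singleline).Trim();

Then print the cleaned answer in the loop: Console.WriteLine($"\nDeepseek: {StripReasoning(response.ToString())}\n");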
7. Running & Testing
- Ensure Ollama is running
  - Some systems start Ollama automatically; otherwise, start the server yourself:
  ollama serve
- Run your .NET app
  - Hit F5 (or Ctrl+F5) in Visual Studio.
  - Type a question and watch the console output for DeepSeek’s response.
Support me!
If you found this guide helpful, make sure to check out the accompanying YouTube video tutorial where I walk you through the process visually. Don’t forget to subscribe to my channel for more amazing tutorials!
Feel free to leave your questions, comments, or suggestions below. Happy coding!