Unleashing GPT's Power in a Console App with Semantic Kernel and .NET
Artificial Intelligence is transforming the way we approach problem-solving, and with tools like GPT, it's becoming increasingly accessible even to beginners. In this article, we'll walk through the creation of a simple console application that leverages prompt engineering techniques to fulfill various requirements—essentially building a straightforward AI agent.
Why Build an AI Agent?
Imagine having a tool that can not only chat with you, providing insightful responses, but also handle custom tasks with tailored prompts. By integrating GPT's capabilities with .NET using Semantic Kernel, you can create an AI agent that offers both chat interaction and the ability to process custom prompts, generating beautifully formatted HTML reports.
What This Application Does
The application we're building consists of two main features:
Chat Mode: This mode allows the user to engage in a conversation with ChatGPT via a command prompt. Once the chat session concludes, the entire chat history is saved as an HTML file on the user's machine, complete with clean formatting. This feature is perfect for developers who want to experiment with real-time AI interactions.
Custom Prompt Mode: For more specific tasks, the user can prepare a markdown document containing a custom prompt. The application then processes this prompt through ChatGPT and returns the results as an HTML page. This feature is ideal for generating detailed reports, summaries, or any other formatted output based on complex queries.
What You Need to Know
Before diving in, there are a few prerequisites:
Basic Programming Knowledge: Familiarity with C# and .NET is essential, as this forms the backbone of the application.
Helpful Background Knowledge: While not strictly necessary, understanding the basics of large language models like ChatGPT and some prompt engineering techniques will help you get the most out of this project.
Let's dive in and get started!
1) First, we'll build a menu-based user interface to enhance usability.
Program.cs:
public class Program
{
public static async Task Main(string[] args)
{
Console.WriteLine("======== Chatgpt Assistant ========");
while (true)
{
DisplayMenu();
// Get user input
Console.Write("Enter your choice (1-4, or 'x' to exit): ");
string? userInput = Console.ReadLine();
// Check if the user wants to exit
if (userInput?.ToLower() == "x")
{
break;
}
// Process user input
ProcessUserInput(userInput);
}
}
private static void DisplayMenu()
{
Console.WriteLine("Menu:");
Console.WriteLine("1. Chat Mode");
Console.WriteLine("2. Run Custsom Prompt");
Console.WriteLine("\nX. Exit");
Console.WriteLine();
Console.WriteLine();
}
private static void ProcessUserInput(string? userInput)
{
switch (userInput)
{
case "1":
Console.WriteLine("You selected: Chat Mode");
break;
case "2":
Console.WriteLine("You selected: Custom Prompt");
break;
default:
Console.WriteLine("Invalid choice. Please enter a number between 1 and 4, or 'X' to exit.");
break;
}
Console.WriteLine();
}
}
2) Define a class to encapsulate the various settings used by the application.
AppSettings.cs:
namespace ChatgptAssistant
{
public class AppSettings
{
public string OpenAIModelID { get; set; }
public string OpenAIKey { get; set; }
public string PromptDirectory { get; set; }
public string ResultDirectory { get; set; }
public AppSettings(string openAIModelId, string openAIKey, string promptDir, string resultDirectory)
{
OpenAIModelID = openAIModelId;
OpenAIKey = openAIKey;
PromptDirectory = promptDir;
ResultDirectory = resultDirectory;
}
}
}
3) Now install the following packages via the NuGet Package Manager.
- Microsoft.Extensions.Configuration by Microsoft (9.0.0)
- Microsoft.Extensions.Configuration.Json by Microsoft (9.0.0)
- System.Configuration.ConfigurationManager by Microsoft (9.0.0)
- Microsoft.SemanticKernel by Microsoft (1.2.0)
- System.CodeDom (8.0.0)
- Markdig by Alexandre Mutel (0.37.0)
4) Set up a JSON file to store API keys and other environment-specific data.
appsettings.json:
{
"OpenAIModelID": "model id",
"OpenAIKey": "api key",
"PromptDirectory": "prompt dir",
"ResultDirectory": "result dir"
}
Ensure that appsettings.json is excluded from version control. Then, set the file's 'Copy to Output Directory' property to 'Copy always'.
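The updated Program.cs in step 7 will call Utils.GetAppSettings() to read these values, but the article doesn't show that helper itself. Here is a minimal sketch of what it might look like, assuming the configuration packages installed above and the AppSettings class from step 2; adjust the keys and error handling to suit your setup.
Utils.cs (sketch):
using Microsoft.Extensions.Configuration;
namespace ChatgptAssistant;
public static class Utils
{
    // Loads appsettings.json from the output directory and binds its values to AppSettings.
    public static AppSettings GetAppSettings()
    {
        IConfigurationRoot config = new ConfigurationBuilder()
            .SetBasePath(AppContext.BaseDirectory)
            .AddJsonFile("appsettings.json", optional: false)
            .Build();
        return new AppSettings(
            config["OpenAIModelID"] ?? string.Empty,
            config["OpenAIKey"] ?? string.Empty,
            config["PromptDirectory"] ?? string.Empty,
            config["ResultDirectory"] ?? string.Empty);
    }
}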
5) Visit the OpenAI portal (https://platform.openai.com/api-keys) to generate your API key.
6) Update the appsettings.json file with your API key and the ID of your preferred model.
You can find the available models here.
As of this writing, gpt-3.5-turbo-16k is available and will be used in this tutorial. Feel free to switch to any model you prefer, keeping in mind that each model has its own advantages and disadvantages. You can explore more about these models here.
updated appsettings.json:
{
"OpenAIModelID": "gpt-3.5-turbo-16k",
"OpenAIKey": "<key here>",
"PromptDirectory": "prompt dir",
"ResultDirectory": "result dir"
}
7) Next, we'll initiate a simple chat with a few messages. The ChatHistory class from Semantic Kernel handles most of the heavy lifting for us. We will use our own method to print the messages to the console.
ChatServiceUtils.cs:
using Microsoft.SemanticKernel.ChatCompletion;
namespace ChatgptAssistant;
public class ChatServiceUtils
{
public static async Task StartChatAsync(IChatCompletionService chatGPT)
{
Console.WriteLine("Chat content:");
Console.WriteLine("------------------------");
var chatHistory = new ChatHistory("You are a librarian, expert about books");
// First user message
chatHistory.AddUserMessage("Hi, I'm looking for book suggestions");
await MessageOutputAsync(chatHistory);
// First bot assistant message
var reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
await MessageOutputAsync(chatHistory);
// Second user message
chatHistory.AddUserMessage("I love history and philosophy, I'd like to learn something new about Greece, any suggestion");
await MessageOutputAsync(chatHistory);
// Second bot assistant message
reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
await MessageOutputAsync(chatHistory);
}
private static Task MessageOutputAsync(ChatHistory chatHistory)
{
var message = chatHistory.Last();
Console.WriteLine($"{message.Role}: {message.Content}");
Console.WriteLine("------------------------");
return Task.CompletedTask;
}
}
Updated Program.cs:
using ChatgptAssistant;
using Microsoft.SemanticKernel.Connectors.OpenAI;
public class Program
{
public static async Task Main(string[] args)
{
Console.WriteLine("======== Chatgpt Assistant ========");
AppSettings appSettings = Utils.GetAppSettings();
OpenAIChatCompletionService chatCompletionService = new(appSettings.OpenAIModelID, appSettings.OpenAIKey);
while (true)
{
DisplayMenu(appSettings);
// Get user input
Console.Write("Enter your choice (1-4, or 'x' to exit): ");
string? userInput = Console.ReadLine();
// Check if the user wants to exit
if (userInput?.ToLower() == "x")
{
break;
}
// Process user input
await ProcessUserInput(userInput, chatCompletionService);
}
}
private static void DisplayMenu(AppSettings appSettings)
{
Console.WriteLine("Menu:");
Console.WriteLine("1. Chat Mode");
Console.WriteLine("2. Run Custsom Prompt");
Console.WriteLine("\nX. Exit");
Console.WriteLine();
Console.WriteLine("Prompt Directory:\t" + appSettings.PromptDirectory);
Console.WriteLine("Result Directory:\t" + appSettings.ResultDirectory);
Console.WriteLine("ChatGPT Model:\t\t" + appSettings.OpenAIModelID);
Console.WriteLine();
}
private static async Task ProcessUserInput(string? userInput, OpenAIChatCompletionService chatService)
{
switch (userInput)
{
case "1":
Console.WriteLine("You selected: Chat Mode");
await ChatServiceUtils.StartChatAsync(chatService);
break;
case "2":
Console.WriteLine("You selected: Custom Prompt");
break;
default:
Console.WriteLine("Invalid choice. Please enter a number between 1 and 4, or 'X' to exit.");
break;
}
Console.WriteLine();
}
}
Then run the project.
If the console shows the user and assistant messages printed in turn, you're on the right track. If not, you'll need to troubleshoot and resolve any issues with this basic step before moving on.
8) Next, we'll enhance the application to allow the chat to continue until the user decides to end it. This can be achieved using a simple while loop. Additionally, we'll refactor the message printing method by moving it to a utility class and making it public for better reusability.
ChatServiceUtils.cs:
public static async Task StartChatAsync(IChatCompletionService chatGPT)
{
try
{
var chatHistory = new ChatHistory("You are a helpful assistant.");
while (true)
{
Console.WriteLine("Your message: ");
string? msg = Console.ReadLine();
// Check if the user wants to exit
if (msg?.ToLower() == "exit")
{
chatHistory.AddUserMessage("Generate a title for this chat session.");
var reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
break;
}
if (!string.IsNullOrWhiteSpace(msg))
{
// User sends msg
chatHistory.AddUserMessage(msg);
// GPT sends reply
var reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
await Utils.MessageOutputAsync(chatHistory);
}
}
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
}
Utils.cs:
public static Task MessageOutputAsync(ChatHistory chatHistory)
{
var message = chatHistory.Last();
if (message.Role == AuthorRole.Assistant)
{
Console.ForegroundColor = ConsoleColor.Yellow;
}
Console.WriteLine($"{message.Role}:\n{message.Content}");
Console.ResetColor();
Console.WriteLine("------------------------");
return Task.CompletedTask;
}
Then run the project again. Now, ChatGPT's messages are displayed in yellow.
✅ Great job! Now you have a simple chat application powered by ChatGPT.
9) Now comes the challenging part: saving the chat results as a well-formatted HTML page. We can break this task into several smaller steps:
- Generate a Suitable Title: Create an appropriate title for the chat context.
- Create Template File and Partial Class: Develop a template file and a partial class to handle HTML file generation.
- Iterate Through Messages: Go through each message in the chat history, creating a list of messages with their respective roles (user or assistant).
- Remove Final Messages: Exclude the last two messages from the list, as they are related to title generation.
- Generate HTML File: Pass the list of messages to a T4 template class to produce the HTML file.
- Save and Open HTML File: Save the HTML file to disk and automatically open it in the default web browser.
10) Since we've already integrated title generation into our StartChatAsync method, we can proceed with the remaining steps for saving and formatting the chat results as an HTML page.
Update the title generation prompt so that the response can be used directly as a file name, without additional processing (a small defensive helper is sketched after the snippet below).
ChatServiceUtils.cs > StartChatAsync:
while (true)
{
Console.WriteLine("Your message: ");
string? msg = Console.ReadLine();
// Check if the user wants to exit
if (msg?.ToLower() == "exit")
{
chatHistory.AddUserMessage("Generate a title for this chat session. Title should be able to use as a file name in windows OS. Don't use - and _ to seperate words. But spaces");
var reply = await chatCompletionService.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
break;
}
if (!string.IsNullOrWhiteSpace(msg))
{
// User sends msg
chatHistory.AddUserMessage(msg);
// GPT sends reply
var reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
await Utils.MessageOutputAsync(chatHistory);
}
}
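Even with this instruction, the model may occasionally return characters that Windows does not allow in file names. The article relies on the prompt alone, but if you want a defensive fallback, a small hypothetical helper along these lines (not part of the original code) could be added to Utils.cs:
// Hypothetical helper: strips characters that are invalid in Windows file names.
public static string ToSafeFileName(string title)
{
    var invalidChars = Path.GetInvalidFileNameChars();
    var cleaned = new string(title.Where(c => !invalidChars.Contains(c)).ToArray());
    return cleaned.Trim();
}
You could then pass the generated title through this helper before building the HTML file path.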
11) Create a directory named "Templates" and add a Runtime Text Template file (ChatSessionTemplate.tt) inside it.
The generated ChatSessionTemplate.cs is a partial class, meaning its definition can be split across multiple files. Note that this class is regenerated automatically from the template and should not be modified directly. Instead, we implement our custom logic in a separate partial class file with a Partial suffix in its name, so it is easy to identify and modify when needed.
12) Create a new file named ChatSessionTemplatePartial.cs to extend the existing partial class and implement your logic.
ChatSessionTemplatePartial.cs:
public partial class ChatSessionTemplate
{
public Dictionary<Guid, ChatBubble>? ChatHistory { get; set; }
public string? Title { get; set; }
}
public class ChatBubble
{
public string? Role { get; set; }
public string? Message { get; set; }
}
Remember to use the partial keyword in your new class definition. The ChatSessionTemplate.tt file is the template that generates the final HTML output, using the properties defined in the ChatSessionTemplatePartial.cs class. This .tt file contains both static and dynamic content for rendering the results.
ChatSessionTemplate.tt:
<#@ output extension=".html" #>
<#@ import namespace="ChatgptAssistant.Templates"#>
<#@ import namespace="Microsoft.SemanticKernel"#>
<!DOCTYPE html>
<html lang = "en" >
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>
<#= Title #>_Chat
</title>
<style>
/* Your CSS styles here */
/* Reset default browser styles */
body, h1, h2, h3, p {
margin: 0;
padding: 0;
}
/* Set body font and line height */
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
line-height: 1.6;
background-color: #f9f9f9;
color: #333;
padding: 0 5em;
}
footer {
color: #979797;
font-size: 12px;
border-top: 1px solid #ddd;
text-align: center;
}
.markdown-content {
background-color: #fff;
padding: 20px;
min-width: 1080px;
margin: 0 auto;
box-shadow: 0 5px 5px #ddd;
display: flex;
flex-direction: column;
max-width: 65vw;
}
code {
color: #333535;
background-color: #f5faff;
display: inline-block;
padding: .3em .5em;
border-radius: 5px;
border: 1px dashed #a5ccee;
}
/* Headings */
h1, h2, h3 {
color: #4473a6; /* Blue headings */
margin-bottom: 10px;
}
/* Paragraphs */
p {
margin-bottom: 15px;
}
/* Links */
a {
color: #007bff; /* Blue links */
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
/* Lists */
ul, ol {
margin-bottom: 15px;
}
/* Table */
table {
border-collapse: collapse;
width: 100%;
}
th, td {
padding: 8px;
text-align: left;
border-bottom: 1px solid #ddd; /* Light gray bottom border */
}
th {
background-color: #007bff; /* Blue header background */
color: #fff; /* White text color */
}
.user.bubble {
align-self: flex-end;
max-width: 85%;
display: flex;
flex-direction: column;
align-items: flex-end;
padding: 5px 12px;
background: aliceblue;
border-radius: 5px;
}
.assistant.bubble {
align-self: flex-start;
max-width: 80%;
display: flex;
flex-direction: column;
align-items: flex-start;
padding: 5px 12px;
background: #e4f2ff;
margin: 20px 0;
border-radius: 5px;
}
.assistant.roleTitle {
font-weight: bold;
text-transform: capitalize;
color: #0d66b5;
}
.user.roleTitle {
font-weight: bold;
text-transform: capitalize;
color: #5b5b5b;
}
/* Add more styles as needed */
</style>
</head>
<body>
<div Class="markdown-content">
<h2 style = "text-align: center;" >
<#=Title #>
</h1>
<#
foreach (KeyValuePair<Guid, ChatgptAssistant.Templates.ChatBubble> kvp in ChatHistory)
{
#>
<div Class="<#= kvp.Value?.Role == "user" ? "user" : "assistant" #> bubble">
<div Class="<#= kvp.Value?.Role == "user" ? "user" : "assistant" #> roleTitle">
<#= kvp.Value?.Role ?? "" #>
</div>
<div Class="msgContent"> <#= kvp.Value?.Message ?? "" #></div>
</div>
<#
}
#>
<footer>
Generated With ChatGPT API, Semantic library And .NET
</footer>
</div>
</body>
</html>
I will explain the structure of the template.
First, we need to include these directives to correctly import our classes and specify the file extension for the generated output (HTML in our case):
<#@ output extension=".html" #>
<#@ import namespace="ChatgptAssistant.Templates"#>
<#@ import namespace="Microsoft.SemanticKernel"#>
All dynamic content within the template is enclosed in <# #> tags. For example:
<title>
<#= Title #>_Chat
</title>
When we instantiate our template class, we pass the Title value to dynamically generate the appropriate content.
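For example, once the template is compiled, instantiating the generated class and calling its TransformText() method produces the final HTML string. A minimal usage sketch, assuming the ChatSessionTemplate and ChatBubble types defined above (and a using ChatgptAssistant.Templates; directive), might look like this:
var template = new ChatSessionTemplate
{
    Title = "Sample Chat",
    ChatHistory = new Dictionary<Guid, ChatBubble>
    {
        // The Message values are expected to already be HTML (converted from Markdown later in the article).
        [Guid.NewGuid()] = new ChatBubble { Role = "user", Message = "<p>Hello!</p>" },
        [Guid.NewGuid()] = new ChatBubble { Role = "assistant", Message = "<p>Hi! How can I help?</p>" }
    }
};
string html = template.TransformText();
We will call TransformText() in exactly this way in StartChatAsync when we save the chat session to disk.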
13) We pass all the messages from our chat session as a dictionary. The foreach loop in the template generates the HTML markup for displaying these messages.
<#
foreach (KeyValuePair<Guid, ChatgptAssistant.Templates.ChatBubble> kvp in ChatHistory)
{
#>
<div Class="<#= kvp.Value?.Role == "user" ? "user" : "assistant" #> bubble">
<div Class="<#= kvp.Value?.Role == "user" ? "user" : "assistant" #> roleTitle">
<#= kvp.Value?.Role ?? "" #>
</div>
<div Class="msgContent"> <#= kvp.Value?.Message ?? "" #></div>
</div>
<#
}
#>
The rest of the .tt file contains static content, which won't change. Feel free to edit and experiment with these static parts, such as changing the styles; the primary goal is simply to present our messages in a clean and attractive way. To be honest, most of these styles were generated by ChatGPT; who enjoys writing CSS manually these days, right?
Here's ChatServiceUtils.cs after incorporating the new modifications:
using ChatgptAssistant.Templates;
using Markdig;
using Microsoft.SemanticKernel.ChatCompletion;
namespace ChatgptAssistant;
public class ChatServiceUtils
{
public static async Task StartChatAsync(IChatCompletionService chatGPT)
{
try
{
var chatHistory = new ChatHistory("You are a helpful assistant.");
while (true)
{
Console.WriteLine("Your message: ");
string? msg = Console.ReadLine();
// Check if the user wants to exit
if (msg?.ToLower() == "exit")
{
// Instruct ChatGPT to create a title for the chat
chatHistory.AddUserMessage("Generate a title for this chat session. Title should be able to use as a file name in windows OS. Don't use - and _ to seperate words. But spaces");
var reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
break;
}
if (!string.IsNullOrWhiteSpace(msg))
{
// User sends msg
chatHistory.AddUserMessage(msg);
// GPT sends reply
var reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
await Utils.MessageOutputAsync(chatHistory);
}
}
// We use a dictionary to keep a track of messages sent by each role.
var messageDictionary = new Dictionary<Guid, ChatBubble>();
var markDigPipeline = new MarkdownPipelineBuilder().UseAdvancedExtensions().Build();
foreach (var msg in chatHistory)
{
// Ignore initial system message
if (msg.Role == AuthorRole.System)
{
continue;
}
// ChatGPT sends responses in Markdown format. We can convert them into HTML using the Markdig library
var htmlContent = Markdown.ToHtml(msg.Content!, markDigPipeline);
messageDictionary.Add(Guid.NewGuid(), new ChatBubble() { Message = htmlContent, Role = msg.Role.ToString() });
}
// Remove title generation message and reply so that they are not included in html output.
var keysToRemove = messageDictionary.Keys.TakeLast(2).ToList();
foreach (var key in keysToRemove)
{
messageDictionary.Remove(key);
}
// Last response from Chatgpt is the title for the chat session.
var chatTitle = chatHistory.Last().Content?.Replace('"', ' ').Trim();
// Pass message dictionary and title to the template
var chatSessionTemplate = new ChatSessionTemplate
{
ChatHistory = messageDictionary,
Title = chatTitle
};
// We can get the result directory from appsettings.json. The user can change this directory as they wish.
var resultDirectoryPath = Utils.GetAppSettings().ResultDirectory;
var htmlPath = $"{resultDirectoryPath}/{chatTitle}.html";
File.WriteAllText(htmlPath, chatSessionTemplate.TransformText());
// Open the saved html file with default browser.
Utils.OpenFile(htmlPath);
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
}
}
We use the chatHistory object to retrieve all the messages from the chat session and convert them from Markdown into HTML using the Markdown.ToHtml() method. We then populate our messageDictionary with the essential details: the message content and the role (user or assistant).
// We use a dictionary to keep a track of messages sent by each role.
var messageDictionary = new Dictionary<Guid, ChatBubble>();
var markDigPipeline = new MarkdownPipelineBuilder().UseAdvancedExtensions().Build();
foreach (var msg in chatHistory)
{
// Ignore initial system message
if (msg.Role == AuthorRole.System)
{
continue;
}
// ChatGPT sends responses in Markdown format. We can convert them into HTML using the Markdig library
var htmlContent = Markdown.ToHtml(msg.Content!, markDigPipeline);
messageDictionary.Add(Guid.NewGuid(), new ChatBubble() { Message = htmlContent, Role = msg.Role.ToString() });
}
We need to remove the last two messages, which are related to title generation. We can retrieve the chat session title from the last message. After that, we pass the messageDictionary and the title to the template instance and write the content to the specified path. The Utils.OpenFile method automatically opens the generated HTML file in your default browser.
// Remove title generation message and reply so that they are not included in html output.
var keysToRemove = messageDictionary.Keys.TakeLast(2).ToList();
foreach (var key in keysToRemove)
{
messageDictionary.Remove(key);
}
// Last response from Chatgpt is the title for the chat session.
var chatTitle = chatHistory.Last().Content?.Replace('"', ' ').Trim();
// Pass message dictionary and title to the template
var chatSessionTemplate = new ChatSessionTemplate
{
ChatHistory = messageDictionary,
Title = chatTitle
};
// We can get the result directory from appsettings.json. The user can change this directory as they wish.
var resultDirectoryPath = Utils.GetAppSettings().ResultDirectory;
var htmlPath = $"{resultDirectoryPath}/{chatTitle}.html";
File.WriteAllText(htmlPath, chatSessionTemplate.TransformText());
// Open the saved html file with default browser.
Utils.OpenFile(htmlPath);
14) Update appsettings.json to include a directory path for storing the HTML results.
"ResultDirectory": "C:\\Users\\<user>\\Desktop\\GptAssistant\\Results"
15) Add an OpenFile method to Utils.cs (Process lives in System.Diagnostics, so add using System.Diagnostics; at the top of the file):
public static void OpenFile(string path)
{
try
{
Process.Start(new ProcessStartInfo
{
FileName = path,
UseShellExecute = true
});
}
catch (Exception ex)
{
Console.WriteLine($"An error occurred while opening file: {ex.Message}");
}
}
If you follow the steps correctly, you should be able to chat with ChatGPT using the application. Type 'exit' to end the chat session.
Isn't that a cool application? Now you have the opportunity to enhance this application further with your creativity and requirements. You could port this implementation to a web or desktop application, creating a more refined user experience beyond the terminal interface.
Next, we’ll dive into the second part of our application: the custom prompt feature.
The basic idea for this feature is to allow you to submit prompts written in Markdown syntax through the application and save the results in an organized manner.
16) Update appsettings.json to include a directory path where you can store your prompts.
appsettings.json:
"PromptDirectory": "C:\\Users\\<user>\\Desktop\\GptAssistant\\Prompt",
Create a markdown file named 'CustomPrompt.md' in that directory and add your prompt to it.
17) Add a helper method to Utils.cs for reading the prompt file from disk.
Utils.cs:
public static string? ReadFile(string filePath)
{
try
{
return File.ReadAllText(filePath);
}
catch (Exception ex)
{
Console.WriteLine($"Error reading file: {ex.Message}");
return null;
}
}
18) Update Program.cs and ChatServiceUtils.cs.
Program.cs:
case "2":
Console.WriteLine("You selected: Custom Prompt");
await ChatServiceUtils.RunCustomPrompt(chatService);
break;
ChatServiceUtils.cs:
public static async Task RunCustomPrompt(IChatCompletionService chatGPT)
{
// Read CustomPrompt.md
var promptPath = Path.Combine(Utils.GetAppSettings().PromptDirectory, "CustomPrompt.md");
var prompt = Utils.ReadFile(promptPath);
// Create the result directory with a unique name
var timeStamp = DateTimeOffset.UtcNow.ToString("yyyyMMddHHmmssfff");
var uniqueResultDirectory = Path.Combine(Utils.GetAppSettings().ResultDirectory, $"{timeStamp}");
Directory.CreateDirectory(uniqueResultDirectory);
// Create a copy of the input prompt
var outputPath = Path.Combine(uniqueResultDirectory, $"{timeStamp}_CustomPrompt.md");
File.WriteAllText(outputPath, prompt);
// Start the chat session
Console.WriteLine("Chat content:");
Console.WriteLine("------------------------");
// System message
var chatHistory = new ChatHistory("You are an helpful assistant");
// Submit the prompt to ChatGPT
chatHistory.AddUserMessage(prompt);
await Utils.MessageOutputAsync(chatHistory);
// Response from ChatGPT
var reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
await Utils.MessageOutputAsync(chatHistory);
}
If you run the app now, you should see the result for the submitted prompt when you select the custom prompt option.
19) Now, we need to generate a suitable title and use it along with a timestamp to create a more organized file structure. Then, we can generate the result HTML file just like we did earlier.
- Create a New Runtime Template: Define a new runtime text template, PromptResultTemplate.tt (the framework generates the PromptResultTemplate.cs class from it).
- Create the Partial Class: Develop the partial class for this template to handle the custom logic and integration.
PromptResultTemplatePartial.cs:
namespace ChatgptAssistant.Templates
{
public partial class PromptResultTemplate
{
public string? Content { get; set; }
public string? Title { get; set; }
}
}
PromptResultTemplate.tt:
<#@ output extension=".html" #>
<#@ import namespace="ChatgptAssistant.Templates"#>
<!DOCTYPE html>
<html lang = "en" >
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>
<#= Title #>
</title>
<style>
/* Your CSS styles here */
/* Reset default browser styles */
body, h1, h2, h3, p {
margin: 0;
padding: 0;
}
/* Set body font and line height */
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
line-height: 1.6;
background-color: #f9f9f9;
color: #333;
padding: 0 5em;
}
footer {
color: #979797;
font-size: 12px;
border-top: 1px solid #ddd;
text-align: center;
}
.markdown-content {
background-color: #fff;
padding: 20px;
min-width: 1080px;
margin: 0 auto;
box-shadow: 0 5px 5px #ddd;
}
code {
color: #333535;
background-color: #f5faff;
display: inline-block;
padding: .3em .5em;
border-radius: 5px;
border: 1px dashed #a5ccee;
}
/* Headings */
h1, h2, h3 {
color: #4473a6; /* Blue headings */
margin-bottom: 10px;
}
/* Paragraphs */
p {
margin-bottom: 15px;
}
/* Links */
a {
color: #007bff; /* Blue links */
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
/* Lists */
ul, ol {
margin-bottom: 15px;
}
/* Table */
table {
border-collapse: collapse;
width: 100%;
}
th, td {
padding: 8px;
text-align: left;
border-bottom: 1px solid #ddd; /* Light gray bottom border */
}
th {
background-color: #007bff; /* Blue header background */
color: #fff; /* White text color */
}
/* Add more styles as needed */
</style>
</head>
<body>
<div Class="markdown-content">
<#= Content #>
<footer>
Generated With OpenAI ChatGPT API, Semantic library And .NET
</footer>
</div>
</body>
</html>
Modified ChatServiceUtils.cs:
public static async Task RunCustomPrompt(IChatCompletionService chatGPT)
{
/*
* Message Flow
*
* 0. System message
* 1. Prompt
* 2. Response for prompt
* 3. Title prompt
* 4. Response for title prompt
*/
// Read CustomPrompt.md
var promptPath = Path.Combine(Utils.GetAppSettings().PromptDirectory, "CustomPrompt.md");
var prompt = Utils.ReadFile(promptPath);
// Start the chat session
Console.WriteLine("Chat content:");
Console.WriteLine("------------------------");
// System message
var chatHistory = new ChatHistory("You are an helpful assistant");
// Submit the prompt to ChatGPT
chatHistory.AddUserMessage(prompt);
await Utils.MessageOutputAsync(chatHistory);
// Response from ChatGPT
var reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
await Utils.MessageOutputAsync(chatHistory);
// Generate a title
chatHistory.AddUserMessage("Generate a title for this chat session. Title should be able to use as a file name in windows OS. Don't use - and _ to seperate words. But spaces");
reply = await chatGPT.GetChatMessageContentAsync(chatHistory);
chatHistory.Add(reply);
// Extract the title and reply for the prompt
var gptResult = chatHistory[2]!.Content;
var title = chatHistory[4]!.Content!.Replace('"', ' ').Trim();
// Create the result directory with a unique name
var timeStamp = DateTimeOffset.UtcNow.ToString("yyyyMMddHHmmssfff");
var uniqueResultDirectory = Path.Combine(Utils.GetAppSettings().ResultDirectory, $"{title}_{timeStamp}");
Directory.CreateDirectory(uniqueResultDirectory);
// Create a copy of the input prompt
var outputPath = Path.Combine(uniqueResultDirectory, $"{timeStamp}_CustomPrompt.md");
File.WriteAllText(outputPath, prompt);
// Generate html from markdown
var pipeline = new MarkdownPipelineBuilder().UseAdvancedExtensions().Build();
var htmlContent = Markdown.ToHtml(gptResult!, pipeline);
var promptResultTemplate = new PromptResultTemplate() { Content = htmlContent, Title = title };
var htmlResultPath = Path.Combine(uniqueResultDirectory, $"{title}_Result_{timeStamp}.html");
File.WriteAllText(htmlResultPath, promptResultTemplate.TransformText());
Utils.OpenFile(htmlResultPath);
}
20) Now, let’s test the custom prompt option by using a more complex prompt. You can utilize Markdown to organize and structure your prompt effectively.
Example:
# Prompt: Advanced Data Analysis and Visualization Task
## Task Description
You are a data scientist working for a healthcare company. Your task is to analyze a large dataset containing patient information, medical records, and treatment outcomes. The goal is to identify trends, patterns, and correlations that could help improve patient care and treatment effectiveness.
## Dataset Information
The dataset includes the following columns:
- **PatientID**: Unique identifier for each patient
- **Age**: Age of the patient
- **Gender**: Gender of the patient (M/F)
- **Diagnosis**: Primary diagnosis of the patient
- **Treatment**: Type of treatment administered
- **Outcome**: Treatment outcome (Success/Failure)
- **FollowUpDuration**: Duration of follow-up in months
- **Comorbidities**: Any additional medical conditions the patient has
- **Medications**: List of medications prescribed
## Tasks
1. **Data Cleaning and Preprocessing**:
- Handle missing values.
- Convert categorical data to numerical format if necessary.
- Normalize or standardize numerical data.
2. **Exploratory Data Analysis (EDA)**:
- Generate summary statistics for numerical columns.
- Create visualizations (e.g., histograms, bar charts, box plots) to understand the distribution of data.
- Identify any anomalies or outliers.
3. **Correlation Analysis**:
- Compute the correlation matrix for numerical variables.
- Identify significant correlations between different variables.
- Visualize the correlation matrix using a heatmap.
4. **Predictive Modeling**:
- Build a logistic regression model to predict treatment outcomes based on patient features.
- Evaluate the model using appropriate metrics (e.g., accuracy, precision, recall, F1-score).
- Perform cross-validation to ensure the model's robustness.
5. **Insights and Recommendations**:
- Summarize key findings from the analysis.
- Provide actionable insights that could help improve patient care.
- Recommend potential areas for further research.
## Deliverables
- **Code**: Provide well-documented code for data preprocessing, EDA, correlation analysis, and predictive modeling.
- **Report**: A detailed report summarizing your findings, including visualizations and statistical analysis.
- **Presentation**: A PowerPoint presentation highlighting the key insights and recommendations.
## Tools and Libraries
You may use the following tools and libraries:
- **Python**: For data analysis and modeling
- **Pandas**: For data manipulation
- **NumPy**: For numerical operations
- **Matplotlib/Seaborn**: For data visualization
- **Scikit-Learn**: For building predictive models
## Notes
- Ensure that your code is modular and reusable.
- Use comments and markdown cells in your Jupyter notebook to explain your thought process and findings.
- Pay attention to the clarity and readability of your visualizations.
# Write the python script
Results:
What are the benefits of this feature?
When we use ChatGPT in a web browser, we have limited freedom to craft the most effective prompt. If your prompt is lengthy, you typically have to write it somewhere else first and then paste it into the small text box. And if you have a prompt you reuse often but tweak slightly before each submission, you end up constantly switching between the browser tab and your editor. The idea here is to write the prompt in a single file, CustomPrompt.md, then edit and submit it through our simple application. The response from ChatGPT is rendered as a nicely formatted HTML page and stored alongside a copy of the prompt, so over time you build up a well-organized history of your prompts and their results.
Final thoughts
Our aim was to become familiar with these tools and technologies so that we can build an intelligent app that fulfils our own requirements. You can easily extend this feature to handle much more complex scenarios, such as batch processing of multiple prompts, generating programming code with ChatGPT and saving it to the appropriate file format using templates, or even a far more ambitious application that serves as a knowledge base for hundreds of prompts and their results. Your imagination is the only limit.