Xiaoyun Zhang
Building a Multi-LLM Profanity Detector in C# using StepWise

This post demonstrates how to create a robust profanity detection workflow using multiple Large Language Models (LLMs) in C# with the StepWise framework. The complete code is available in ProfanityDetector.cs. You can visit this blog post to learn more about StepWise.

Profanity Detector Overview

The profanity detection workflow consists of the following steps:

  1. Get input text from the user
  2. Send the text to three AI analysis steps
  3. Collect votes from all models
  4. Make a final determination based on majority voting

What makes this workflow interesting is that it runs multiple AI models in parallel and uses a voting system to increase reliability. Some models might be more sensitive to certain types of content than others, so using multiple models helps provide a more balanced assessment.
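To see why voting helps, here is a back-of-the-envelope calculation (my own illustration, not from the StepWise docs): if each of three independent analyses is right with probability p, a majority vote is right whenever at least two of the three are right.

```csharp
using System;

static class VoteMath
{
    // Probability that a majority of three independent analyses is correct,
    // given each one alone is correct with probability p:
    // all three right (p^3) plus exactly two of three right (3 * p^2 * (1 - p)).
    public static double MajorityOfThree(double p) =>
        Math.Pow(p, 3) + 3 * Math.Pow(p, 2) * (1 - p);
}

class Demo
{
    static void Main()
    {
        // With 80%-accurate voters, majority voting reaches about 89.6%.
        Console.WriteLine(VoteMath.MajorityOfThree(0.8)); // ≈ 0.896
    }
}
```

The gain only holds when the errors are reasonably independent, which is part of the appeal of mixing different providers.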

You can try out this workflow in the online demo.

Create a new StepWise project

To create a new StepWise project, use the template that StepWise provides. Run the following commands with the dotnet CLI:

# install the StepWise template
dotnet new install LittleLittleCloud.StepWise.Template

# create a new StepWise project
dotnet new stepwise-console -n ProfanityDetectorProject

Define the Workflow

In StepWise, we'll define our workflow as a C# class where each step is represented by a public async method with the Step attribute. Let's start by creating the ProfanityDetector class:

public class ProfanityDetector
{
    private readonly ChatClient _chatClient;

    public ProfanityDetector()
    {
        // Here we use deepseek v3 api for LLM inference because it's cost-effective.
        // You can also replace it with other LLM providers like OpenAI, Claude, etc
        _chatClient = ChatClientProvider.Instance.CreateDeepSeekV3();
    }

    [Step(description: """
        ## ProfanityDetector
        This workflow helps you determine if a given text contains rude or profane language.
        """)]
    public async Task<string> README()
    {
        return "README is Completed";
    }
}

Adding User Input

The first step is to get input from the user. We'll use StepWise's UI input capability:

[StepWiseUITextInput(description: "Please provide the text you want to check")]
public async Task<string?> Input()
{
    return null;
}

Implementing AI Analysis

Next, we'll add three parallel AI analysis steps. In this example each step calls the same LLM in a separate, independent request; you can also point each step at a different provider if you want genuinely different perspectives:

[Step(description: "Detect profane language using AI1")]
[DependOn(nameof(Input))]
public async Task<bool> AI1(
    [FromStep(nameof(Input))] string input)
{
    var prompt = $"""
        Please determine if the following text contains rude or profane language, 
        or is inappropriate to be shared in public:
        {input}

        Answer with 'yes' or 'no'
        """;

    var systemMessage = new SystemChatMessage(prompt);
    var response = await _chatClient.CompleteChatAsync(systemMessage);

    return response.Value.Content[0].Text.ToLower().Contains("yes");
}

The AI2 and AI3 methods follow the same pattern. Each AI step:

  • Depends on the Input step
  • Takes the input text as a parameter
  • Sends a prompt to the LLM
  • Returns a boolean indicating whether profanity was detected
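One caveat with the `Contains("yes")` check above: a verbose reply such as "No. Although the word 'yes'…" would be misread as a detection. A slightly more defensive parser (my own sketch, not part of the post's code) only trusts the start of the reply:

```csharp
using System;

static class ReplyParser
{
    // Interpret an LLM yes/no reply. We trim whitespace and common
    // quoting/punctuation, then only accept a reply that *starts* with
    // "yes" — Contains("yes") would also match a "no" answer that
    // merely mentions the word.
    public static bool IsYes(string reply)
    {
        var normalized = reply.Trim().Trim('\'', '"', '.', '!').ToLowerInvariant();
        return normalized.StartsWith("yes");
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(ReplyParser.IsYes("Yes."));                          // True
        Console.WriteLine(ReplyParser.IsYes("No, 'yes' is not my answer."));   // False
    }
}
```

Each AI step could return `ReplyParser.IsYes(response.Value.Content[0].Text)` instead of the substring check.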

Implementing the Voting System

Finally, we'll add the voting step that collects results from all three AI models and makes a final determination:

[Step(description: "Collect votes and determine the result")]
[DependOn(nameof(AI1))]
[DependOn(nameof(AI2))]
[DependOn(nameof(AI3))]
public async Task<string> Voting(
    [FromStep(nameof(AI1))] bool ai1,
    [FromStep(nameof(AI2))] bool ai2,
    [FromStep(nameof(AI3))] bool ai3)
{
    var votes = new[] { ai1, ai2, ai3 };
    var yesVotes = votes.Count(v => v);
    var noVotes = votes.Count(v => !v);

    return yesVotes > noVotes 
        ? "The text contains rude or profane language" 
        : "The text does not contain rude or profane language";
}

The Voting step:

  • Depends on all three AI analysis steps
  • Takes the results as parameters
  • Uses simple majority voting to make the final determination
  • Returns a human-readable result string
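The same idea extends beyond three voters. A small helper (hypothetical, not part of StepWise) makes the majority rule explicit and reusable for any number of models; note that with an even number of voters, a tie resolves to "no":

```csharp
using System;
using System.Linq;

static class Votes
{
    // Strict majority: more than half the votes must be "yes".
    // With an even vote count, a tie therefore resolves to false ("no"),
    // which errs on the side of not flagging the text.
    public static bool Majority(params bool[] votes) =>
        votes.Count(v => v) * 2 > votes.Length;
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(Votes.Majority(true, true, false));  // True
        Console.WriteLine(Votes.Majority(true, false, false)); // False
        Console.WriteLine(Votes.Majority(true, false));        // False (tie)
    }
}
```

With this helper, the `Voting` step's body reduces to a single call: `Votes.Majority(ai1, ai2, ai3)`.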

Add ProfanityDetector workflow to StepWise server

After the ProfanityDetector workflow is defined, we can add it to the StepWise server using the StepWiseClient instance. This will allow the workflow to be executed in the StepWise UI.

// Program.cs
// ...configure the StepWise server
var stepWiseClient = host.Services.GetRequiredService<StepWiseClient>();
var workflow = new ProfanityDetector();
stepWiseClient.AddWorkflow(Workflow.CreateFromInstance(workflow));

// Wait for the host to shutdown
await host.WaitForShutdownAsync();

The code above creates a StepWise server and adds the ProfanityDetector workflow to it. The server will be hosted on http://localhost:5123 by default. You can visit that URL to see the StepWise UI and execute the workflow.

Conclusion

This post demonstrated how to build a multi-LLM profanity detector in C# using the StepWise framework. By combining multiple AI models and using a voting system, we can create a more robust and reliable profanity detection workflow.
