Using the official OpenAI library for .NET to access locally running LLMs and SLMs

In this tutorial, we will use the official OpenAI library for .NET to create a simple console application. This application will implement a Shakespearean chat using either OpenAI’s models or locally running LLMs or SLMs with Ollama.

Sebastian Jensen
medialesson
6 min read · Jun 10, 2024


Introduction

I’ve already published another blog post demonstrating how to use the Azure.AI.OpenAI NuGet package to create a copilot using either OpenAI or Azure OpenAI. In this blog post, we will focus on the official OpenAI NuGet package, which is currently available in a preview version, to create our own copilot using OpenAI. Additionally, we will use the same code to chat with a local Large Language Model (LLM) or a local Small Language Model (SLM).

Preparation

Before you start, you need an OpenAI account and an OpenAI key. You can register by navigating to platform.openai.com.

If you want to run a local LLM or SLM, you need to install Ollama and download the desired model. I have already explained this process in another blog post.
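For reference, pulling a model with the Ollama CLI is a one-liner. The following is a sketch, assuming Ollama is already installed and using the phi3 model as an example:

```shell
# Pull a small language model (phi3 is used as an example here)
ollama pull phi3

# Verify the model is available locally
ollama list
```

Once a model is pulled, Ollama serves an OpenAI-compatible API on http://localhost:11434/v1, which we will point our client at later.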

Let’s code

First, open Visual Studio and create a new .NET console application using .NET 8. Add the Spectre.Console NuGet package and the OpenAI NuGet package. Make sure to allow prerelease versions, because, at the time of writing this blog post, the OpenAI NuGet package is only available as a preview version (2.0.0-beta.3).
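If you prefer the command line over Visual Studio, the same project can be scaffolded with the .NET CLI. This is a sketch, assuming the .NET 8 SDK is installed; the project name is just an example:

```shell
# Create a new .NET 8 console project
dotnet new console --framework net8.0 -o ShakespeareCopilot
cd ShakespeareCopilot

# Add Spectre.Console and the OpenAI package
# (the OpenAI package is prerelease at the time of writing)
dotnet add package Spectre.Console
dotnet add package OpenAI --prerelease
```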

Next, create a new folder called Utils in your solution. Within this folder, create a class named Statics.cs. This file will store the variables we will use later on. First, we define our two hosts: OpenAI and Local LLM. Next, we list all available OpenAI models, and finally we store some prompts as static strings.

internal static class Statics
{
    public const string OpenAIKey
        = "OpenAI";

    public const string LocalLLMKey
        = "Local LLM";


    public static string GPT35TurboKey
        = "gpt-3.5-turbo";

    public static string GPT4Key
        = "gpt-4";

    public static string GPT4TurboKey
        = "gpt-4-turbo";

    public static string GPT4oKey
        = "gpt-4o";


    public static string OpenAIKeyPrompt
        = $"Please insert your [yellow]{OpenAIKey}[/] API key:";

    public static string LocalLLMNamePrompt
        = "Please insert your [yellow]local LLM name[/]:";

    public static string SystemMessage
        = "You are William Shakespeare, the English playwright, poet " +
          "and actor. Pretend to be William Shakespeare.";

    public static string UserMessage
        = "Please introduce yourself.";
}

Next, create another file in our Utils folder called ConsoleHelper.cs. This file will contain helpful methods for working with Spectre.Console. For example, it will handle creating the header, the logic for selecting the host, and obtaining the necessary properties to initialize all services accordingly.

internal static class ConsoleHelper
{
    public static void ShowHeader()
    {
        AnsiConsole.Clear();

        Grid grid = new();
        grid.AddColumn();
        grid.AddRow(
            new FigletText("OpenAI .NET Demo")
                .Centered()
                .Color(Color.Red));
        grid.AddRow(
            Align.Center(
                new Panel(
                    "[red]Sample by Thomas Sebastian Jensen " +
                    "([link]https://www.tsjdev-apps.de[/])[/]")));

        AnsiConsole.Write(grid);
        AnsiConsole.WriteLine();
    }

    public static string SelectFromOptions(List<string> options)
    {
        ShowHeader();

        return AnsiConsole.Prompt(
            new SelectionPrompt<string>()
                .Title("Select from the following [yellow]options[/]:")
                .AddChoices(options));
    }

    public static string GetString(string prompt)
    {
        ShowHeader();

        return AnsiConsole.Prompt(
            new TextPrompt<string>(prompt)
                .PromptStyle("white")
                .ValidationErrorMessage("[red]Invalid prompt[/]")
                .Validate(input =>
                {
                    if (input.Length < 3)
                    {
                        return ValidationResult.Error("[red]Value too short[/]");
                    }

                    if (input.Length > 200)
                    {
                        return ValidationResult.Error("[red]Value too long[/]");
                    }

                    return ValidationResult.Success();
                }));
    }

    public static void WriteToConsole(string text)
    {
        AnsiConsole.Markup($"[white]{text}[/]");
    }
}

Now, open the Program.cs file to start implementing the core logic of our Shakespeare copilot. The following code snippet contains the complete code for this file, but don’t worry, I will explain it afterwards.

using System.ClientModel;
using System.Text;
using OpenAI;
using OpenAI.Chat;
using Spectre.Console;

// Show the header.
ConsoleHelper.ShowHeader();

// Select the host.
string host =
    ConsoleHelper.SelectFromOptions(
        [Statics.OpenAIKey, Statics.LocalLLMKey]);

// Initialize the client.
ChatClient? client = null;

// Switch on the host.
switch (host)
{
    case Statics.OpenAIKey:

        // Get the OpenAI API key.
        string openAIApiKey =
            ConsoleHelper.GetString(Statics.OpenAIKeyPrompt);

        // Select the OpenAI model.
        string openAIModel =
            ConsoleHelper.SelectFromOptions(
                [Statics.GPT35TurboKey, Statics.GPT4Key,
                 Statics.GPT4TurboKey, Statics.GPT4oKey]);

        // Initialize the client.
        client = new(openAIModel, new ApiKeyCredential(openAIApiKey));

        break;

    case Statics.LocalLLMKey:

        // Ollama exposes an OpenAI-compatible endpoint;
        // the API key value itself is ignored by Ollama.
        string localApiKey = "ollama";
        Uri localEndpoint = new("http://localhost:11434/v1");

        // Get the local LLM name.
        string localModel =
            ConsoleHelper.GetString(Statics.LocalLLMNamePrompt);

        // Initialize the client.
        client =
            new(localModel,
                new ApiKeyCredential(localApiKey),
                new OpenAIClientOptions { Endpoint = localEndpoint });

        break;
}

// Exit if no client was created.
if (client == null)
{
    return;
}

// Show the header.
ConsoleHelper.ShowHeader();

// Set the options.
ChatCompletionOptions options = new()
{
    MaxTokens = 1000,
    Temperature = 0.7f,
};

// Set the initial messages.
List<ChatMessage> messages =
[
    new SystemChatMessage(Statics.SystemMessage),
    new UserChatMessage(Statics.UserMessage)
];

// Chat loop.
while (true)
{
    AnsiConsole.WriteLine();
    AnsiConsole.MarkupLine("[green]Shakespeare:[/]");

    StringBuilder stringBuilder = new();

    // Stream the chat completion.
    AsyncResultCollection<StreamingChatCompletionUpdate> chatUpdates =
        client.CompleteChatStreamingAsync(messages, options);

    // Loop through the chat updates.
    await foreach (StreamingChatCompletionUpdate chatUpdate in chatUpdates)
    {
        foreach (ChatMessageContentPart contentPart in
            chatUpdate.ContentUpdate)
        {
            ConsoleHelper.WriteToConsole(contentPart.Text);
            stringBuilder.Append(contentPart.Text);
        }
    }

    ConsoleHelper.WriteToConsole(Environment.NewLine);
    messages.Add(new AssistantChatMessage(stringBuilder.ToString()));

    ConsoleHelper.WriteToConsole(
        $"{Environment.NewLine}[green]User:[/]{Environment.NewLine}");

    string? userMessage = Console.ReadLine();
    messages.Add(new UserChatMessage(userMessage));
}

First, we need to ask the user for the desired host. Currently, the available options are OpenAI and Local LLM. Depending on the selection, we either ask for the OpenAI key and the desired OpenAI model, or we ask for the name of the local LLM or local SLM.

Next, we create the ChatCompletionOptions by setting the maximum number of tokens and the temperature. We also define a list of ChatMessage objects, which will be used to send the instructions to our copilot. For now, it is a very simple system prompt, which can be improved in future releases. We then send a UserChatMessage, asking the copilot to introduce itself.

Finally, we start a while (true) loop to simulate the chat. We use the CompleteChatStreamingAsync method of our ChatClient to stream the copilot's response. Once the response is complete, we save the message to our list of ChatMessage objects, ask the user for the next question, add it to the messages, and start again.

Screenshots

The first screenshot shows the host selection.

Depending on your selection, you will be asked for the required parameters. In this case, we need to enter the OpenAI key.

Next, we need to select the OpenAI model.

Finally, you can start chatting with William Shakespeare.

If you select Local LLM as the host, you need to specify the downloaded Ollama model.

In my case, I've downloaded the well-known Phi-3 model provided by Microsoft.

Again, you can start chatting with William Shakespeare.

Conclusion

In this blog post, I’ve demonstrated how to use the OpenAI NuGet package in a .NET console application to work with OpenAI or, with nearly no code changes, with a local LLM or SLM.

You can find the complete code in my GitHub repository.


Senior Software Developer & Team Lead @ medialesson GmbH