Let’s compare the OpenAI models in C#

Recently OpenAI announced GPT-4o, the next Large Language Model to power ChatGPT. Let’s write a simple .NET console application to compare all available OpenAI chat models.

Sebastian Jensen
medialesson
6 min read · May 15, 2024


Introduction

During the Spring Update Stream of OpenAI last Monday, OpenAI introduced their new Large Language Model (LLM) named GPT-4o. The “o” stands for omni, and this model is capable of answering questions using natural voice. Like the other models, it is also capable of answering in writing.

OpenAI describes GPT-4o on their website with the following quote:

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction — it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models. — OpenAI

So I decided to write a simple console application using the Azure.AI.OpenAI NuGet package to use the chat completion API to chat with all available OpenAI models: GPT-3.5 Turbo, GPT-4, GPT-4 Turbo and GPT-4o. For simplicity I will just use the OpenAI models and not the Azure OpenAI models in this blog post.

Let’s code

I will create a simple .NET console application using the Spectre.Console and Azure.AI.OpenAI NuGet packages. So let’s open Visual Studio, create a new console application using .NET 8, and add the two above-mentioned NuGet packages.
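If you prefer the command line over Visual Studio, the same setup can be sketched with the .NET CLI (the project name here is just an assumption; pick whatever name you used for your solution):

```shell
# Create a new .NET 8 console project
dotnet new console --framework net8.0 --name OpenAIModelsComparison
cd OpenAIModelsComparison

# Add the two NuGet packages used in this post
dotnet add package Spectre.Console
dotnet add package Azure.AI.OpenAI --prerelease
```

Note that Azure.AI.OpenAI was still a prerelease package at the time of writing, hence the --prerelease flag.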

We will start by creating a folder Models and, in this folder, the record ModelResponse.cs. This record stores information such as the number of prompt and completion tokens and the duration it took to make the OpenAI request.

namespace OpenAIModelsComparison.Models;

internal record ModelResponse(
    string DeploymentName,
    int PromptTokens,
    int CompletionTokens,
    TimeSpan Duration);

Now we will create another folder called Utils. In this folder we will add a new file called Statics.cs. This file contains the model names of all available models.

namespace OpenAIModelsComparison.Utils;

internal static class Statics
{
    public const string GPT35TurboKey = "gpt-3.5-turbo";
    public const string GPT4Key = "gpt-4";
    public const string GPT4TurboKey = "gpt-4-turbo";
    public const string GPT4oKey = "gpt-4o";
}

Next we will create the class ConsoleHelper.cs in the Utils folder. This class is a helper for working with Spectre.Console. It provides methods to create a header, ask the user to select models, and read a string from the console. It also provides a simple method that renders the ModelResponse objects as a table in the console.

using OpenAIModelsComparison.Models;
using Spectre.Console;

namespace OpenAIModelsComparison.Utils;

internal static class ConsoleHelper
{
    public static void CreateHeader()
    {
        AnsiConsole.Clear();

        Grid grid = new();
        grid.AddColumn();
        grid.AddRow(new FigletText("OpenAI Models").Centered().Color(Color.Red));
        grid.AddRow(Align.Center(new Panel("[red]Sample by Thomas Sebastian Jensen ([link]https://www.tsjdev-apps.de[/])[/]")));

        AnsiConsole.Write(grid);
        AnsiConsole.WriteLine();
    }

    public static List<string> GetSelection(
        string[] options)
    {
        CreateHeader();

        List<string> models = AnsiConsole.Prompt(
            new MultiSelectionPrompt<string>()
                .Title("Please select from the [yellow]options[/].")
                .Required()
                .PageSize(10)
                .InstructionsText(
                    "[grey](Press [yellow]<space>[/] to toggle your " +
                    "selection and [yellow]<enter>[/] to accept)[/]")
                .AddChoices(options));

        return models;
    }

    public static string GetString(
        string prompt)
    {
        CreateHeader();

        return AnsiConsole.Prompt(
            new TextPrompt<string>(prompt)
                .PromptStyle("white")
                .ValidationErrorMessage("[red]Invalid prompt[/]")
                .Validate(input =>
                {
                    if (input.Length < 3)
                    {
                        return ValidationResult.Error("[red]Value too short[/]");
                    }

                    if (input.Length > 200)
                    {
                        return ValidationResult.Error("[red]Value too long[/]");
                    }

                    return ValidationResult.Success();
                }));
    }

    public static void CreateOutputInfo(
        ICollection<ModelResponse> modelResponses)
    {
        AnsiConsole.WriteLine();
        AnsiConsole.WriteLine();

        Table table = new();
        table.Border(TableBorder.Ascii);
        table.Expand();

        table.AddColumn("Model Name");
        table.AddColumn(new TableColumn("Prompt Tokens").Centered());
        table.AddColumn(new TableColumn("Completion Tokens").Centered());
        table.AddColumn(new TableColumn("Duration").Centered());

        foreach (ModelResponse modelResponse in modelResponses)
        {
            table.AddRow(
                modelResponse.DeploymentName,
                modelResponse.PromptTokens.ToString(),
                modelResponse.CompletionTokens.ToString(),
                modelResponse.Duration.ToString());
        }

        AnsiConsole.Write(table);
        AnsiConsole.WriteLine();
    }
}

Now let’s open the Program.cs file to implement the actual logic. The program is pretty simple. First we ask the user for their OpenAI API key, so that we have access to the OpenAI API. Next we let the user decide which models should be used for the comparison. The user needs to select at least one of the four available models, but can also select all four.

Next we will create the ChatCompletionsOptions for all selected models and we will also create our OpenAIClient. We will use a while-true loop to represent our chat. First we get the input from the user, then we make the OpenAI calls using the selected models, collect the ModelResponse objects, and finally print the comparison table before accepting the next user question.

using Azure;
using Azure.AI.OpenAI;
using OpenAIModelsComparison.Models;
using OpenAIModelsComparison.Utils;
using Spectre.Console;
using System.Diagnostics;

// Create header
ConsoleHelper.CreateHeader();

// Get OpenAI key
string openAIKey =
    ConsoleHelper.GetString("Please insert your [yellow]OpenAI[/] API key:");

// Get the models to compare
List<string> selectedModels =
    ConsoleHelper.GetSelection(
        [Statics.GPT35TurboKey, Statics.GPT4Key,
         Statics.GPT4TurboKey, Statics.GPT4oKey]);

// Create OpenAI client
OpenAIClient client = new(openAIKey);

// Create header
ConsoleHelper.CreateHeader();

// Create ChatCompletionsOptions for each selected model
List<ChatCompletionsOptions> chatCompletionsOptions = [];
foreach (string model in selectedModels)
{
    chatCompletionsOptions.Add(CreateChatCompletionsOptions(model));
}

while (true)
{
    AnsiConsole.WriteLine();
    AnsiConsole.MarkupLine("[green]USER:[/]");

    string? userMessage = Console.ReadLine();

    List<ModelResponse> modelResponses = [];
    foreach (ChatCompletionsOptions options in chatCompletionsOptions)
    {
        options.Messages.Add(new ChatRequestUserMessage(userMessage));
        ModelResponse response = await HandleRequest(options);
        modelResponses.Add(response);
    }

    ConsoleHelper.CreateOutputInfo(modelResponses);
}

async Task<ModelResponse> HandleRequest(ChatCompletionsOptions options)
{
    AnsiConsole.WriteLine();
    AnsiConsole.MarkupLine($"[green]{options.DeploymentName}:[/]");

    Stopwatch stopwatch = Stopwatch.StartNew();

    Response<ChatCompletions> chatCompletionsResponse =
        await client.GetChatCompletionsAsync(options);

    stopwatch.Stop();

    string messageContent =
        chatCompletionsResponse.Value.Choices[0].Message.Content;

    AnsiConsole.WriteLine(messageContent);

    options.Messages.Add(
        new ChatRequestAssistantMessage(messageContent));

    CompletionsUsage usageInfo = chatCompletionsResponse.Value.Usage;

    return new ModelResponse(
        options.DeploymentName,
        usageInfo.PromptTokens,
        usageInfo.CompletionTokens,
        stopwatch.Elapsed);
}

static ChatCompletionsOptions CreateChatCompletionsOptions(
    string deploymentName)
{
    ChatCompletionsOptions chatCompletionsOptions = new()
    {
        MaxTokens = 1000,
        Temperature = 0.7f,
        DeploymentName = deploymentName,
    };

    chatCompletionsOptions.Messages.Add(new ChatRequestSystemMessage(
        "You are a helpful AI assistant."));

    return chatCompletionsOptions;
}

Screenshots

Let’s run the application and provide an OpenAI key. Next we select all available models.

We will start with a simple task by writing Introduce yourself. As you can see, each model answers accordingly, but if you look at the duration you will see that GPT-4o is the fastest model and also uses fewer prompt tokens, which represent the input.

Let’s test its knowledge by asking Why is the sky blue. As you can see, GPT-4o answers accordingly by providing lots of information and is much faster than GPT-4 or GPT-4 Turbo.

Finally let’s compare GPT-4 Turbo and GPT-4o by asking for a Python script to calculate the Fibonacci number for a provided input number. You can see that GPT-4o is much faster in completing the task.

Conclusion

In this blog post I’ve shared a simple console application to compare all the current Large Language Models from OpenAI. It is pretty simple to switch the model used in your request and get the most out of it. You can also easily adjust the code to work with Azure OpenAI models.
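As a rough sketch of that adjustment: with the Azure.AI.OpenAI package, targeting Azure OpenAI mainly means constructing the OpenAIClient with your resource endpoint and an AzureKeyCredential instead of just the API key. The endpoint, key, and deployment name below are placeholders for your own Azure resource:

```csharp
// Hypothetical Azure OpenAI configuration - replace with your own values.
Uri azureEndpoint = new("https://your-resource-name.openai.azure.com/");
AzureKeyCredential azureKey = new("your-azure-openai-key");

// The same OpenAIClient type targets Azure OpenAI when given an endpoint.
OpenAIClient azureClient = new(azureEndpoint, azureKey);

// With Azure OpenAI, DeploymentName refers to the name of your
// deployment in the Azure portal, not the raw model name.
ChatCompletionsOptions options = new()
{
    DeploymentName = "your-gpt-4o-deployment",
    MaxTokens = 1000,
    Temperature = 0.7f,
};
```

The rest of the code from this post can stay as it is, since both variants use the same chat completion calls.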

And the best part: every user (even on the free plan) will get access to GPT-4o, although the model will be rolled out in waves. Using the new model is also cheaper compared to GPT-4 Turbo.

You will find the source code in my GitHub repository.

Sebastian Jensen
Senior Software Developer & Team Lead @ medialesson GmbH