Mastering Semantic Kernel Integration, Part 1: Core Plugins

Siva Ramaswami
9 min read · Nov 18, 2023

--

Unless you’ve been living under a rock for the past eleven months, you have most likely played with OpenAI’s ChatGPT. Since ChatGPT was introduced to the public toward the end of November 2022, this resourceful tool has been put to many uses, including (but not limited to) answering questions, improving your writing, learning new languages, cranking out code, and generating poems and jokes.

Tools like ChatGPT and Google’s BERT, among others, are examples of what are called LLMs (short for “large language models”).

While it is interesting to talk to these large language models (LLMs) like ChatGPT, the real value for app developers is the ability to integrate them into business applications. Some of the best use cases include text generation, content moderation, summarization, spam filtering, and sentiment analysis. LLMs like ChatGPT expose Application Programming Interfaces (APIs) for exactly this purpose.

However, consuming the LLM APIs directly can be quite challenging. It puts a significant onus on the developer to manage prompts and responses, handle resources responsibly, and build any supporting tooling as they go through the integration process. The resulting overhead and technical debt can be an unneeded distraction from solving the core problem.

Luckily for application developers, powerful Software Development Kits (SDKs) are available for integrating LLMs into business applications. Semantic Kernel (developed by Microsoft) and LangChain (developed by Harrison Chase) are the most popular, and both are offered as open-source solutions.

LangChain and Semantic Kernel are robust frameworks that let business applications be powered by large language models. While they are similar in certain aspects, each has unique characteristics that can be tailored to specific requirements. LangChain is built around a foundation of Python and JavaScript and offers a number of out-of-the-box tools and seamless integration options. Semantic Kernel, on the other hand, has a lightweight architecture and supports not only Python but also C# and Java. Both cover a great range of use cases and can be compelling toolkits for developers. As you may have guessed, the primary focus of this article, as well as several forthcoming pieces in this series, is integrating with Semantic Kernel.

Semantic Kernel Defined:

Semantic Kernel is an open-source Software Development Kit (SDK) that facilitates combining AI services from providers like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages like C# and Python, resulting in AI apps that combine the best of both worlds.

What is the Kernel in Semantic Kernel?

Just as the kernel in an operating system sits at the core and manages resources, the kernel in Semantic Kernel orchestrates and manages the various components that are crucial for the seamless execution of code in an AI application. This orchestration involves managing configuration, services, and plugins, and it ensures that native code and AI services work together to produce the desired outcome.

Semantic Kernel the Orchestrator

The Semantic Kernel flow is depicted in the diagram above. Let us examine the steps:

  • Ask: The user’s input request to the Semantic Kernel. This could be a question the user would ask an LLM like ChatGPT.
  • Kernel: The central control hub within the SDK, orchestrating the integration and cohesion of the other constituent elements into a comprehensive language application.
  • Skills (or Plugins): Skills have recently been renamed “Plugins”. Plugins use prompts and configuration settings to communicate with language models and generate responses that can be integrated effectively into your application.
  • Memory: The system’s ability to retain and utilize knowledge or information from previous interactions or inputs.
  • Connectors: Enable your application to use external APIs to bring in data that enriches your application.
  • Planner: From your application’s code, you can have the Semantic Kernel automatically create a flow to solve a specific problem. The planner mixes and matches the plugins and connectors loaded into the kernel to create the additional steps.
  • Get: The response that is sent back to the user.

Tools needed to get started

  • VSCode or Visual Studio
  • An OpenAI key via either OpenAI or Azure OpenAI service
  • .NET 7 or .NET 8 SDK
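If you are not sure which .NET SDKs are installed on your machine, you can check from a terminal:

```shell
# Lists every .NET SDK installed on the machine, with versions and install paths.
dotnet --list-sdks
```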

Additionally, you can install Jupyter Notebook if you would like to run the samples on the Microsoft Semantic Kernel documentation website, but that is beyond the scope of this article. If you don’t already have an API key, head over to https://openai.com/blog/openai-api , sign up for one, and save it in a safe location.

There are a number of tutorials available on the internet for getting started with Semantic Kernel using VS Code. In this article I will focus on building with Visual Studio (VS) 2022. If you do not already have VS installed, you can download the free Community edition from Microsoft at this URL: https://visualstudio.microsoft.com/vs/community/

I will also be developing with the C# language on the Windows operating system for the purposes of this article. We are also limiting ourselves (at least for now) to integrating with OpenAI, as opposed to Azure OpenAI.

Now let us launch Visual Studio and click on “Create a new project”

In the “Create a new project” screen, select the Console template and click on “Next”

In the Configure your new project screen, name your project and browse to the location where you want it to be created. Then hit Next.

In the Additional Information screen, select .NET 8.0 (Long Term Support) as the framework and hit “Create”

Visual Studio will now create the solution and project and generate some files and boilerplate code. The main file, Program.cs, contains just a single line of code that prints “Hello, World!” on the console.

Your screen will look something like this:
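For reference, the Program.cs generated by the console template uses .NET’s top-level statements and contains just:

```csharp
// Auto-generated by the .NET console template (top-level statements).
Console.WriteLine("Hello, World!");
```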

The next order of business is to install a NuGet package so you can use Semantic Kernel in your application. In Solution Explorer, right-click the project, click “Manage NuGet Packages”, select the “Browse” tab, and enter “SemanticKernel” in the search box. Make sure to check “Include prerelease”, since Semantic Kernel is still in prerelease as of this writing. You will now see a number of NuGet packages whose names start with “Microsoft.SemanticKernel”. At the time of writing this article, Semantic Kernel is at beta5. Select the “Microsoft.SemanticKernel” package from the list and click “Install”.

Visual Studio will display a “Preview Changes” window; click “Apply”. You may then be prompted with a license-acceptance dialog for additional packages that will also be installed into your project; click “I Accept”. The Semantic Kernel package will now be installed into your project, and you are ready to start integrating with it.

If you now expand the Packages folder under Dependencies in your project, you will see a package called “Microsoft.SemanticKernel” with a version ending in beta5.
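If you prefer the command line over the NuGet UI, the same package can be added from a terminal in the project directory:

```shell
# Adds the latest prerelease version of the Microsoft.SemanticKernel package
# to the project file in the current directory.
dotnet add package Microsoft.SemanticKernel --prerelease
```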

In this part of the tutorial series, we are going to do the following:

  • Initialize the kernel.
  • Import a couple of built-in functions (aka core plugins) from the kernel object instance returned by the initialization step.
  • For each imported function, pass the resulting handle into an asynchronous call to run it, and print the results on the console.

Let’s embark on this journey:

Remove the generated greeting from Program.cs

As the first step outlined above, let us add a function to Program.cs called Init, which returns a new kernel instance built via the kernel builder. Here is how the code for this step looks:
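Here is a sketch of what that Init function might look like, assuming the 1.0.0-beta-era API surface (method names have shifted between prereleases, so treat this as illustrative rather than definitive):

```csharp
using Microsoft.SemanticKernel;

static IKernel Init()
{
    // WARNING: the key is hard-coded here only for demonstration purposes;
    // store it in a secure location instead (see the note on key safety below).
    const string apiKey = "YOUR_OPENAI_API_KEY";

    // Kernel.Builder wires up an OpenAI chat-completion service for the chosen model
    // and returns a fully built kernel instance.
    return Kernel.Builder
        .WithOpenAIChatCompletionService(modelId: "gpt-3.5-turbo", apiKey: apiKey)
        .Build();
}
```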

At the heart of Semantic Kernel is the Kernel class, which exposes a static Builder on which we call WithOpenAIChatCompletionService to register an OpenAI service. This call needs at least two parameters: the modelId and the apiKey. By the way, depending on the beta version of the Semantic Kernel package installed, you may get a warning saying that “KernelBuilder is obsolete” and recommending a different method. Ignore that for now: Semantic Kernel is a fast-moving project, and changes take place so quickly that it is hard to keep up with them. It is good to stay on a given version unless a newer one offers additional goodness.

Now back to the parameters: modelId can be one of several models; in this case I have set it to “gpt-3.5-turbo”. Microsoft’s Learn page on Semantic Kernel (https://learn.microsoft.com/en-us/semantic-kernel/ai-orchestration/kernel/adding-services?tabs=Csharp) lists the available models and their corresponding model types.

It is also not a best practice to leave your OpenAI API key out in the open as I have done here; that was just for quick demonstration purposes. The key should be stored in a secure location. For best-practice guidance, please refer to the following webpage:

https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety
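One simple option along those lines is to read the key from an environment variable rather than hard-coding it (the variable name OPENAI_API_KEY below is just a common convention, not a requirement):

```csharp
// Read the API key from the environment; fail fast with a clear message if it is missing.
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
    ?? throw new InvalidOperationException(
        "Set the OPENAI_API_KEY environment variable before running this app.");
```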

Let’s now turn our attention to some of the out-of-the-box core plugins that the Semantic Kernel GitHub repo has made available.

For a full list of such plugins, check out the following link:

https://learn.microsoft.com/en-us/semantic-kernel/ai-orchestration/plugins/out-of-the-box-plugins?tabs=Csharp

Let’s import one of those core plugins, TimePlugin, and print the current UTC date and time. Here is the Program.cs code that accomplishes this:
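The code might look like the following sketch, again assuming the beta-era API (note that in some prerelease versions TimePlugin lives in a separate Microsoft.SemanticKernel.Plugins.Core package, which you may need to install alongside the main one):

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Plugins.Core;

var kernel = Init();

// ImportFunctions registers the plugin with the kernel and returns a
// dictionary of its functions, keyed by function name.
var timeFunctions = kernel.ImportFunctions(new TimePlugin(), "time");

// Run the UtcNow function and print the current UTC date and time.
var result = await kernel.RunAsync(timeFunctions["UtcNow"]);
Console.WriteLine(result.GetValue<string>());
```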

So, we are doing the following steps here:

  • Call the Init() method defined earlier, which returns a kernel object.
  • From the kernel object, call the built-in ImportFunctions method and pass it an instance of TimePlugin. This call returns a dictionary of the semantic functions available on the time object.
  • Call the kernel’s RunAsync method and pass it, as an argument, the particular function from that dictionary, in this case UtcNow.
  • Print the UTC date-time value.

Running this from Visual Studio prints the current UTC date and time on the console (you will get a different result depending on when you run it).

Now let us import one more core plugin, this time one that summarizes a conversation between two people. Let us add code to Program.cs right below where we imported and ran the TimePlugin.

A ChatTranscript string containing the conversation between the two people is defined at the top of the Program.cs file.
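Putting those pieces together, a sketch of the summarization step might look like this (the transcript text below is invented purely for illustration; substitute any two-person conversation of your own):

```csharp
using Microsoft.SemanticKernel.Plugins.Core;

// A hypothetical transcript; any two-person conversation will do.
const string ChatTranscript = @"
John: Hi Jane, did you get a chance to look at the quarterly numbers?
Jane: I did. Revenue is up, but expenses grew faster than we expected.
John: Should we flag that in tomorrow's meeting?
Jane: Yes, let's add a slide summarizing the cost overruns.";

// ConversationSummaryPlugin builds its semantic functions using the
// kernel's registered AI service, so it takes the kernel as a constructor argument.
var summaryFunctions = kernel.ImportFunctions(
    new ConversationSummaryPlugin(kernel), "conversation");

// Pass the transcript as input and run the SummarizeConversation function.
var summary = await kernel.RunAsync(ChatTranscript, summaryFunctions["SummarizeConversation"]);
Console.WriteLine(summary.GetValue<string>());
```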

Running the above code prints a short summary of the conversation on the console.

That’s all, folks! I look forward to hearing what you think about this article. More features and more code on Semantic Kernel will follow soon. Thanks for your time reading this! To see all the code for this tutorial, please browse to the following URL:

https://github.com/sramaswami11/Master-Semantic-Kernel-Step-By-Step
