Unleashing LLMs: Integrating ChatGPT & Hugging Face with the Quarkus LangChain Extension

Andreas Eberle
Published in arconsis
4 min read · Dec 14, 2023

In the ever-evolving landscape of enterprise applications, the demand for intelligent and conversational interactions becomes more and more important. The ability to seamlessly integrate cutting-edge language models, such as ChatGPT and Hugging Face models, into your Quarkus services opens up a world of possibilities for creating innovative and engaging applications.

Enter the Quarkus LangChain extension, a groundbreaking addition to the Quarkus framework that empowers developers to effortlessly incorporate state-of-the-art large language models (LLMs) into their projects. This extension not only simplifies the integration process but also leverages the speed, efficiency and ease of use for which Quarkus is renowned.

In this series, we’ll explore the potential of the Quarkus LangChain extension, delving into the steps to seamlessly integrate LLMs like ChatGPT and Hugging Face models into your Quarkus services. We’ll showcase how this awesome combination enables developers to push the boundaries of traditional enterprise applications, unlocking new use cases and fostering unparalleled user experiences.

Video Tutorial

Prefer learning through videos? We’ve got you covered. Alongside the detailed written guide, we’ve also prepared a video tutorial where I explain each step of the process. Whether you’re a visual learner or you prefer a hands-on demonstration, this video will complement the written content perfectly.

Integrating ChatGPT and Hugging Face with Quarkus | LangChain for Quarkus

Example Code: To help you follow along and dive deeper into the implementation details, I’ve prepared a GitHub repository with the complete source code to integrate Quarkus with ChatGPT and Hugging Face here: https://github.com/arconsis/quarkus-langchain-examples

Getting Started

To get started, just add one of the two dependencies to your project, depending on whether you want to use OpenAI or Hugging Face models.

// OpenAI
implementation("io.quarkiverse.langchain4j:quarkus-langchain4j-openai:0.4.1")

// or Hugging Face
implementation("io.quarkiverse.langchain4j:quarkus-langchain4j-hugging-face:0.4.1")

The dependencies above use the Gradle Kotlin DSL, but of course you can also add them with Maven or the Gradle Groovy DSL.
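For Maven, the equivalent dependency declarations would look like this (same coordinates and version as the Gradle snippet above; pick the one for your provider):

```xml
<!-- OpenAI -->
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-openai</artifactId>
    <version>0.4.1</version>
</dependency>

<!-- or Hugging Face -->
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-hugging-face</artifactId>
    <version>0.4.1</version>
</dependency>
```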

Keep in mind that because the Quarkus LangChain extension is maintained in the Quarkiverse, it is versioned separately from Quarkus and needs to be updated manually. You can find the newest version on the GitHub releases page.

Defining an AiService

In Quarkus, the integration with an LLM starts by defining an AiService interface. In this example, we will define an assistant that helps us write poems with a certain number of lines.

import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

@RegisterAiService
public interface SimplePoemAiService {

    @SystemMessage("You are a professional poet.")
    @UserMessage("""
            Write a poem about {topic}. The poem should be {lines} lines long.
            """)
    String writePoem(String topic, int lines);
}

An AiService is simply an interface annotated with @RegisterAiService, with one method for each interaction you want to have with an LLM. This is very similar to how you define a REST client in Quarkus.

The methods are then annotated with two annotations: @SystemMessage and @UserMessage. The system message is sent as the first message to the LLM and can be used to define the scope and initial instructions for the model.

The user message is used to create the prompt that is sent to the LLM, and it can be templated with the method’s parameters. In our example, we insert the topic and the number of lines the poem should have by using curly braces in the prompt. This templating mechanism works in both the system and the user message.
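Conceptually, this templating is just placeholder substitution: each {name} in the message is replaced with the value of the matching method parameter. Here is a minimal plain-Java sketch of the idea (for illustration only; the extension itself delegates to LangChain4j’s prompt templates, not this code):

```java
import java.util.Map;

public class PromptTemplateSketch {

    // Replace each {name} placeholder in the template with its value from the map.
    static String render(String template, Map<String, Object> variables) {
        String result = template;
        for (Map.Entry<String, Object> entry : variables.entrySet()) {
            result = result.replace("{" + entry.getKey() + "}", String.valueOf(entry.getValue()));
        }
        return result;
    }

    public static void main(String[] args) {
        String userMessage = "Write a poem about {topic}. The poem should be {lines} lines long.";
        // The parameter names of writePoem(topic, lines) become the template variables.
        String prompt = render(userMessage, Map.of("topic", "Quarkus", "lines", 8));
        System.out.println(prompt);
        // Write a poem about Quarkus. The poem should be 8 lines long.
    }
}
```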

Configuring API Access

In Quarkus, we configure our application in the application.properties file. There, we need to configure the API key used to connect to OpenAI or Hugging Face, like this:

# OpenAI
quarkus.langchain4j.openai.api-key=${OPEN_AI_API_KEY}


# or Hugging Face
quarkus.langchain4j.huggingface.api-key=${HUGGINGFACE_API_KEY}

You can now either create an environment variable to provide the API key (you only need the one for the extension you added) or replace the variable with the key directly.
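For example, assuming you use the OpenAI extension and the Gradle wrapper, you could export the key and start dev mode like this (the key value is a placeholder you need to replace with your own):

```shell
# Provide the API key as an environment variable (placeholder value)
export OPEN_AI_API_KEY=<your-openai-api-key>

# Start the Quarkus service in dev mode
./gradlew quarkusDev
```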

Calling the AiService

For our example, we will create a simple REST resource where we inject the SimplePoemAiService. As you can see, you can simply inject it like any other bean in Quarkus.

This resource creates a simple REST GET endpoint with the path /poem and two query parameters: topic and lines.

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import org.jboss.resteasy.reactive.RestQuery;

@Path("/poem")
@Produces(MediaType.TEXT_PLAIN)
public class PoemResource {

    @Inject
    SimplePoemAiService simplePoemAiService;

    @GET
    public String writeSimplePoem(@RestQuery String topic, @RestQuery int lines) {
        return simplePoemAiService.writePoem(topic, lines);
    }
}

Once you’ve started your Quarkus service, you can test it by calling the REST endpoint, e.g. using this curl command:

curl -L 'http://localhost:8080/poem?topic=Developers%20love%20Quarkus&lines=8'

And the result will look something like this:

In lines of code, their tales unfurled,
Developers love Quarkus, a cloud-native world,
With every command and each deploy,
It’s joy distilled, a coder’s toy.

Quarkus springs forth, light and brisk,
Uniting devs in a jubilant disk.
A dance of productivity at every turn,
In love they code, as Quarkus they learn.

Selecting a Model

By default, the Quarkus LangChain extension currently uses gpt-3.5-turbo (OpenAI) and tiiuae/falcon-7b-instruct (Hugging Face). If you want to change the model, simply add the following to your application.properties file:

# OpenAI
quarkus.langchain4j.openai.chat-model.model-name=gpt-4-1106-preview

# or Hugging Face
quarkus.langchain4j.huggingface.chat-model.inference-endpoint-url=https://api-inference.huggingface.co/models/google/flan-t5-small

Enable Chat Message Logging

To better understand what Quarkus is sending to the LLM and what’s coming back from it, you can enable logging for requests and responses. Again, just add the following lines to your application.properties:

# OpenAI
quarkus.langchain4j.openai.log-requests=true
quarkus.langchain4j.openai.log-responses=true

# or Hugging Face
quarkus.langchain4j.huggingface.log-requests=true
quarkus.langchain4j.huggingface.log-responses=true

For more details on all available properties, have a look at the Quarkus LangChain documentation.

Summary

With the Quarkus LangChain extension you can easily integrate your favorite LLMs into your Quarkus enterprise service and use them from anywhere within your application by simply injecting the AiService. With a few annotations and configuration properties, you can integrate with ChatGPT and Hugging Face in no time.

Stay tuned for the next parts of our Quarkus LangChain series! In them, we will discuss how to handle chat message memory, create powerful agents that let ChatGPT call Quarkus functions, and use retrieval-augmented generation (RAG).

Solutions Architect & Software Engineer @arconsis