Harness the power of GPT just by defining an interface

Minsoo Cheong
4 min read · Jun 11, 2023


Introducing interfAIce, an open-source library that automatically generates GPT-powered proxies, with support for Kotlin, Java, Spring, Reactor, and Coroutines

Summary
My new library, interfAIce, generates a working GPT-powered implementation from nothing but an interface definition.

Check it out here: https://github.com/mscheong01/interfAIce

One of the greatest benefits of harnessing the power of LLMs in your application is that it can now receive natural language as input and “understand” it. This has revolutionized the UI of countless software products. However, integrating LLMs into pre-existing systems or applications can pose challenges, particularly because the output returned is also natural language. While this is natural for a Natural Language Processing (NLP) model and poses no problem when the response is used in its entirety (e.g., chatbots), the value of GPT extends beyond such uses. GPT has demonstrated its potential as an early-stage Artificial General Intelligence, implying that it can handle more complex tasks. This capacity, however, requires that the output conform to a specified format that can be parsed and used effectively by the other components of the application.

Numerous GPT-enabled tools that rely on the model to decide on specific user-driven actions build on this concept. However, the approach typically involves a laborious routine: meticulously crafting a prompt that spells out the desired output format, then writing a parser for that format, over and over for each use case.
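That manual routine looks roughly like the following; a minimal sketch in Kotlin, where the function names, prompt wording, and task are hypothetical illustrations rather than code from any particular tool:

```kotlin
import com.fasterxml.jackson.core.type.TypeReference
import com.fasterxml.jackson.databind.ObjectMapper

// Hand-written format instructions baked into the prompt...
fun buildPrompt(notes: String): String = """
    Extract the action items from the following meeting notes.
    Respond ONLY with a JSON array of strings, e.g. ["buy milk", "email Bob"].
    Notes: $notes
""".trimIndent()

// ...and a hand-rolled parser for the expected response format.
fun parseTasks(llmResponse: String): List<String> =
    ObjectMapper().readValue(llmResponse, object : TypeReference<List<String>>() {})
```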

In response to this, I’m excited to present interfAIce, a newly released open-source library that introduces a fresh paradigm for integrating GPT into your system. In essence, interfAIce reduces the integration of Large Language Models (LLMs) to a single step: defining an interface whose methods represent the tasks the LLM should perform. interfAIce also comes with rich support for technologies commonly used in modern Java and Kotlin applications, making it suitable for production use.

Here’s an example:
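A minimal sketch of what such a definition can look like; `@OpenAiInterface` is the library’s annotation (described below), while the interface name, method, and types are illustrative assumptions of mine:

```kotlin
// (import for @OpenAiInterface omitted: see the library’s docs)

// A hypothetical task specification: the method signature tells the
// generated proxy what to ask the LLM and what type to parse the
// answer into.
@OpenAiInterface
interface TodoAssistant {
    // An optional free-text description can add context for the model.
    fun extractTasks(meetingNotes: String): List<String>
}
```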

[Image: output]

As you can see, there are no limitations on how you define the interface. All you need to do is choose the method name, return type, and parameters to fit your use case, and, if additional context about the method’s behavior is needed, include a description. Just like that, the auto-generated proxy functions exactly as intended. It’s as simple as that!

interfAIce also supports the following features:

  • Spring integration

Any interface annotated with `@OpenAiInterface` will automatically be configured as a Spring bean, making it injectable into your Spring components (see the sketch after this list).

  • Kotlin Coroutines & Java Reactor Support

Non-blocking method definitions using kotlinx.coroutines or Project Reactor are supported, so the library fits naturally into asynchronous applications.
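To illustrate both features together, here is a minimal sketch; the suspending and `Mono`-returning method shapes, the interface, and the service are hypothetical examples of how such declarations might look, not verbatim library samples:

```kotlin
// (import for @OpenAiInterface omitted: see the library’s docs)
import org.springframework.stereotype.Service
import reactor.core.publisher.Mono

// Hypothetical async variants of the earlier interface.
@OpenAiInterface
interface AsyncTodoAssistant {
    // Kotlin Coroutines: a suspending, non-blocking call
    suspend fun extractTasks(meetingNotes: String): List<String>

    // Project Reactor: the same task exposed as a Mono
    fun extractTasksReactive(meetingNotes: String): Mono<List<String>>
}

// Because the proxy is registered as a Spring bean, it can be
// injected like any other component.
@Service
class MeetingService(private val assistant: AsyncTodoAssistant) {
    suspend fun actionItems(notes: String): List<String> =
        assistant.extractTasks(notes)
}
```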

Under the hood, interfAIce creates Java proxies that leverage reflection and Jackson to implement a transcoder: method invocations are encoded into LLM requests, and the results are decoded back into method responses.
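As a rough illustration of that general technique (and explicitly not interfAIce’s actual implementation), a JDK dynamic proxy plus Jackson could look like this; `LlmClient`, the prompt wording, and the helper names are all assumptions of mine:

```kotlin
import com.fasterxml.jackson.databind.ObjectMapper
import java.lang.reflect.InvocationHandler
import java.lang.reflect.Method
import java.lang.reflect.Proxy

// Hypothetical stand-in for an OpenAI chat-completion call.
fun interface LlmClient {
    fun complete(prompt: String): String
}

class LlmInvocationHandler(
    private val client: LlmClient,
    private val mapper: ObjectMapper = ObjectMapper()
) : InvocationHandler {
    override fun invoke(proxy: Any, method: Method, args: Array<out Any>?): Any? {
        // Encode the invocation as a prompt that pins down the output format.
        val prompt = buildString {
            append("Perform the task '").append(method.name).append("' with arguments ")
            append(mapper.writeValueAsString(args ?: emptyArray<Any>()))
            append(". Respond only with JSON matching the type ")
            append(method.genericReturnType.typeName).append('.')
        }
        val reply = client.complete(prompt)
        // Decode the model's JSON reply back into the declared return type.
        return mapper.readValue(reply, mapper.constructType(method.genericReturnType))
    }
}

// Create a proxy instance for any interface type T.
inline fun <reified T> llmProxy(client: LlmClient): T =
    Proxy.newProxyInstance(
        T::class.java.classLoader,
        arrayOf(T::class.java),
        LlmInvocationHandler(client)
    ) as T
```

A real implementation would also need to handle `Object` methods like `toString`, retries, and malformed model output, but the shape above captures the encode/decode idea.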

[Diagram: structure of the interfAIce proxy]

This initiative draws its inspiration from Backend-GPT, an innovative project that delegates the operation of an entire backend system to an LLM. It offered a tantalizing glimpse of an era in which Artificial General Intelligence (AGI) models could independently manage intricate systems. At the moment, however, LLMs do not possess the power to supplant an entire system; their capabilities are more aptly applied to smaller, context-specific tasks.

This sparked an idea: rather than using the LLM as the autonomous, overarching system, we could begin by automating the process of integrating it as a component in the parts of a service where its capabilities are in demand. That train of thought led to the creation of interfAIce, which lets you author LLM task specifications much as you would design a service component: by defining an interface.

With today’s public launch of this project, my aspiration is to witness its evolution into a more robust, reliable tool for developers. I’m eager to see the introduction of new features, such as support for other LLM providers like Bard, or the inclusion of different models such as Stable Diffusion. Moreover, there is significant potential for enhancement in the prompts used to dictate the LLM’s response format. This aspect could greatly benefit from prompt engineering, an area where the collective wisdom of the open-source community can make a considerable contribution.

If you found this interesting, please feel free to check out our GitHub repository and begin contributing to its development.
