Mastering Reliable Outputs with LLMP

Lukasz Kowejsza
3 min read · Oct 23, 2023


Every software developer knows the struggle: you’re using large language models (LLMs) in your projects, and while they’re great for text generation and creative tasks, they can be… well, a bit unpredictable when it comes to structured outputs. Crafting the perfect prompts, selecting the right examples, and tuning those pesky temperature and top_k parameters is time-consuming. Worse, it breaks your development flow, leaving you frustrated and yearning for a simpler solution. Enter LLMP. Let’s take a deep dive into how it can make your life a whole lot easier. (GitHub: LLMP)

Image created with deepai.org

Introduction

As software developers, we’re always looking for tools and frameworks that make our lives easier. If you’ve ever grappled with the unpredictability of large language models in your projects, LLMP might just be the solution you’ve been waiting for. Let’s dive in and uncover how it works.

Understanding the LLMP Framework

At its core, LLMP (Large Language Model Programming) is a Python framework designed to bring simplicity and reliability to generative NLP tasks. But what sets it apart?

  • High-Level Abstraction: Instead of getting bogged down in the nitty-gritty of manual prompt crafting, LLMP offers high-level abstractions. Developers can easily create, store, and optimize prompts for various jobs and integrate them seamlessly into software applications.
  • Tailored for Structured Outputs: LLMP shines especially when tasks require structured outputs. Unlike frameworks that rely on JSON and often need multiple LLM calls to obtain the desired output, LLMP leverages the YAML format. This not only reduces token usage but is also more intuitive for large language models.
  • Reliability with StructGenie: At the heart of LLMP’s validation mechanism is StructGenie, its underlying engine. It ensures robust type validation, and output models can incorporate rules such as multiple choice or loops.
  • Management and Optimization: LLMP isn’t just about creating prompts; it’s about managing them. From logging performance to storing prompts in a dedicated filesystem, the framework lets developers efficiently manage and optimize their LLM jobs.
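To make the last two points concrete, here is a minimal, stdlib-only sketch of the general idea — parse a YAML-style completion, then enforce a multiple-choice rule on it. This is illustrative only and not LLMP’s or StructGenie’s actual internals:

```python
# Toy sketch (NOT StructGenie): parse a YAML-style LLM completion and
# enforce a multiple-choice rule on the result.
GENRES = {"fiction", "non-fiction", "fantasy", "sci-fi",
          "romance", "thriller", "horror", "other"}

def parse_yaml_line(text: str) -> dict:
    # A real implementation would use a YAML parser; a single
    # "key: value" line is enough for this sketch.
    key, _, value = text.partition(":")
    return {key.strip(): value.strip()}

def validate(output: dict) -> dict:
    # Multiple-choice rule: the genre must be one of the allowed values.
    genre = output.get("genre")
    if genre not in GENRES:
        raise ValueError(f"genre must be one of {sorted(GENRES)}, got {genre!r}")
    return output

result = validate(parse_yaml_line("genre: fantasy"))
print(result)  # {'genre': 'fantasy'}
```

Note how little syntax the YAML line `genre: fantasy` needs compared with its JSON equivalent `{"genre": "fantasy"}` — that is the token saving the second bullet refers to.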

Getting Started

When diving into the world of LLMP, one of the first things you’ll encounter is the initialization process. Instead of the traditional method of manually crafting prompts, LLMP offers a more streamlined approach through its Program class. Let's break down the steps:

1. Define the input and output models

  • Through a dictionary
  • Using a string template
  • As demonstrated in our example, with a Pydantic BaseModel

```python
from typing import Literal
from pydantic import BaseModel

class InputObject(BaseModel):
    book_title: str
    book_author: str
    release_year: int

class OutputObject(BaseModel):
    genre: Literal["fiction", "non-fiction", "fantasy", "sci-fi",
                   "romance", "thriller", "horror", "other"]
```
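For reference, the dictionary form of the same schema might look roughly like this. This is a hypothetical sketch — the field-name-to-type-string mapping shown here is an assumption, so check the LLMP documentation for the exact format it accepts:

```python
# Hypothetical dictionary form of the schema above (not verified against
# LLMP's actual API; consult its docs for the accepted format).
input_model = {
    "book_title": "str",
    "book_author": "str",
    "release_year": "int",
}
output_model = {
    "genre": "str",
}
```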

2. Give your Job a name

Now we can initialize a Job by giving it a name and passing our input and output models to Program:

```python
from llmp.services.program import Program

program = Program("Book to Genre", input_model=InputObject, output_model=OutputObject)
```

Now the magic begins under the hood:

a. Automatic Instruction Crafting: Once initialized, LLMP gets to work: it analyzes the provided input and output models to craft an appropriate instruction. If your task demands more specific directives, you can also supply your own instruction to guide the process.
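To build intuition for what “analyzing the models” might involve, here is a deliberately tiny, stdlib-only sketch — not LLMP’s actual implementation — that drafts an instruction string from the models’ field names (stdlib dataclasses stand in for the Pydantic models so the sketch runs without extra dependencies):

```python
from dataclasses import dataclass, fields

# Stand-ins for the Pydantic models above, using stdlib dataclasses
# so this sketch has no third-party dependencies.
@dataclass
class InputObject:
    book_title: str
    book_author: str
    release_year: int

@dataclass
class OutputObject:
    genre: str

def draft_instruction(input_cls, output_cls) -> str:
    """Draft a naive instruction from the models' field names."""
    in_names = ", ".join(f.name for f in fields(input_cls))
    out_names = ", ".join(f.name for f in fields(output_cls))
    return f"Given {in_names}, respond with {out_names} in YAML format."

print(draft_instruction(InputObject, OutputObject))
# Given book_title, book_author, release_year, respond with genre in YAML format.
```

LLMP’s real instruction crafting is of course richer than concatenating field names, but the principle — deriving the prompt from the declared schema instead of writing it by hand — is the same.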

b. Job Creation & Storage: After the instruction is crafted, LLMP creates a unique job, assigns it an ID, and saves it in a dedicated directory. This ingenious system allows for easy recall of the job for future tasks, just by referencing its ID or name.

3. Run your program by calling it with input data

Using the Initialized Job: With the job set up, you can now put it to use. By providing the necessary input data, LLMP will return the output in line with the defined models.

```python
input_data = {
    "book_title": "The Lord of the Rings",
    "book_author": "J. R. R. Tolkien",
    "release_year": 1954,
}
output = program(input_data=input_data)
print(output)
# {'genre': 'fantasy'}
```
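Because the result is a plain dict that matches OutputObject, downstream code can rely on its shape. For example, bucketing a batch of classified books by genre (a usage sketch; the results list here is made-up data, not real program output):

```python
from collections import defaultdict

# Made-up results, standing in for several program(...) calls.
results = [
    {"genre": "fantasy"},
    {"genre": "sci-fi"},
    {"genre": "fantasy"},
]

# Since every output is guaranteed to have a "genre" key, we can
# aggregate without any defensive parsing.
by_genre = defaultdict(int)
for output in results:
    by_genre[output["genre"]] += 1

print(dict(by_genre))
# {'fantasy': 2, 'sci-fi': 1}
```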

With LLMP taking care of the intricate details of prompt crafting and offering a structured approach to initialization, developers can home in on their core tasks instead of wrestling with their tools.

Conclusion

The LLMP framework is revolutionizing the way developers use large language models in their applications. By providing a streamlined approach to defining tasks, ensuring reliable outputs, and offering robust management features, LLMP is a game-changer in the world of generative NLP.

In upcoming articles, we’ll delve deeper into LLMP’s advanced features, share practical use cases, and provide insights to help you get the most out of this powerful tool. Stay tuned!


Lukasz Kowejsza

AI enthusiast blending software dev & prompt engineering. Exploring large models through Python projects. Join my AI expert journey.