[Hands-On] Prompt-based Text Classification Using Large Language Models

Hugman Sangkeun Jung
11 min read · Jun 30, 2024

(You can find the Korean version of the post at this link.)

In our previous post, we explored the basic concepts and traditional methods of classification. In this post, we will write working code that performs prompt-based text classification with large language models and visualizes the results.

What is Prompt-based Text Classification?

Prompt-based text classification is a zero-shot text classification method using Large Language Models (LLMs). It operates differently from traditional classification methods. Key features include:

  • Using prompts: Short texts that give the model specific instructions.
  • Reframing as language generation: Converts the classification problem into a text generation problem.
  • Flexibility: Applies to a wide range of classification tasks, without being limited to a fixed number of classes.
  • Utilizing pre-trained models: Leverages the knowledge already encoded in large language models such as GPT-3.5.

The core idea is to transform the classification task into a natural language question the model can understand. For example, we place the text to be classified in the prompt along with a question such as "What is the topic of this text?", and the language model then directly 'generates' the classification result.
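The idea above can be sketched in a few lines of Python. This is a minimal illustration, not the post's actual implementation: the function names (`build_prompt`, `classify`) and the candidate labels are hypothetical, and the model call is injected as a plain function so the prompt-building and answer-parsing logic can be shown without depending on a specific API. In practice, the injected `llm` function would wrap a call to a model such as GPT-3.5.

```python
def build_prompt(text, labels):
    """Turn a classification task into a natural-language question."""
    label_list = ", ".join(labels)
    return (
        f"Text: {text}\n"
        f"What is the topic of this text? "
        f"Answer with exactly one of: {label_list}.\n"
        f"Answer:"
    )

def classify(text, labels, llm):
    """Ask the model to *generate* the class label, then map the
    generated string back onto one of the candidate labels."""
    answer = llm(build_prompt(text, labels)).strip().lower()
    for label in labels:
        if label.lower() in answer:
            return label
    return None  # the model produced an unexpected answer

# Stand-in "model" for illustration only; a real setup would call an LLM API.
def fake_llm(prompt):
    return "Sports"

result = classify(
    "The team won the championship game.",
    ["Sports", "Politics", "Technology"],
    fake_llm,
)
print(result)  # → Sports
```

Note that because the model answers in free text, the parsing step that maps the generated string back to one of the known labels is essential; this is what lets a generation model behave like a classifier.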


Hugman Sangkeun Jung

Hugman Sangkeun Jung is a professor at Chungnam National University, with expertise in AI, machine learning, NLP, and medical decision support.