Text Generation vs. Text2Text Generation

Sharath S Hebbar
2 min read · Sep 27, 2023


Text Generation

Text Generation, also known as Causal Language Modeling, is the process of generating text that closely resembles human writing.

Text Generation using GPT-2

It uses a decoder-only architecture and operates in a left-to-right context: each token is predicted from the tokens that came before it.
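The left-to-right constraint is enforced with a causal (lower-triangular) attention mask, so each token can attend only to itself and earlier tokens. Here is a minimal NumPy sketch of such a mask (illustrative only, not the actual GPT-2 implementation):

```python
import numpy as np

# Causal mask for a sequence of 4 tokens: row i marks which
# positions token i may attend to (1 = visible, 0 = masked).
seq_len = 4
causal_mask = np.tril(np.ones((seq_len, seq_len), dtype=int))

print(causal_mask)
```

The first token sees only itself, while the last token sees the entire prefix, which is exactly what "left-to-right context" means in practice.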

Text Generation is often employed for tasks such as sentence completion and generating the next lines of poetry when given a few lines as input.

Examples of Text Generation models include the GPT family, BLOOM, and PaLM, which find applications in Chatbots, Text Completion, and content generation.

Here’s a code representation of text generation using the HuggingFace pipeline.

from transformers import pipeline

task = "text-generation"
model_name = "gpt2"
max_output_length = 30
num_of_return_sequences = 2
input_text = "Hello, "

text_generator = pipeline(task, model=model_name)

text_generator(
    input_text,
    max_length=max_output_length,
    num_return_sequences=num_of_return_sequences,
)
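The pipeline returns a list of dictionaries, one per returned sequence. A minimal sketch of pulling out the generated strings (same `gpt2` model as above; the exact text will vary because GPT-2's default generation settings sample):

```python
from transformers import pipeline

text_generator = pipeline("text-generation", model="gpt2")

outputs = text_generator(
    "Hello, ",
    max_length=30,
    num_return_sequences=2,
)

# Each item is a dict with a "generated_text" key that
# includes the original prompt as a prefix.
for out in outputs:
    print(out["generated_text"])
```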

Text2Text Generation

Text-to-Text Generation, also known as Sequence-to-Sequence Modeling, is the process of converting one piece of text into another.

Text2Text Generation using T5

It relies on an encoder-decoder architecture: the encoder reads the input bidirectionally, attending to both left and right context, while the decoder generates the output left-to-right.
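The architectural difference shows up directly in the attention masks: the encoder sees the whole input at once, while the decoder remains causal. A small NumPy sketch of the contrast (illustrative only):

```python
import numpy as np

seq_len = 4

# Encoder self-attention: every token attends to every other token.
encoder_mask = np.ones((seq_len, seq_len), dtype=int)

# Decoder self-attention: causal, left-to-right only.
decoder_mask = np.tril(np.ones((seq_len, seq_len), dtype=int))

print(encoder_mask)
print(decoder_mask)
```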

Text-to-text generation is frequently employed for tasks such as translating English sentences into French or summarizing lengthy paragraphs.

Examples of Text2Text Generation models include T5 and BART, which are commonly used in question-answering, Translation, and Summarization tasks.

Here’s a code representation of Text-to-Text Generation using the HuggingFace pipeline.

from transformers import pipeline

task = "text2text-generation"
model_name = "google/flan-t5-base"
input_text = "Can you convert this text to French language: Hello, How are you"

text2text_generator = pipeline(task, model=model_name)

text2text_generator(input_text)
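As with text generation, the output is a list of dictionaries keyed by "generated_text", but here it contains only the model's answer, not the prompt. A quick sketch using the same model as above:

```python
from transformers import pipeline

text2text_generator = pipeline(
    "text2text-generation", model="google/flan-t5-base"
)

outputs = text2text_generator(
    "Can you convert this text to French language: Hello, How are you"
)

# One dict per input; the prompt is not echoed back.
print(outputs[0]["generated_text"])
```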

Reference: https://github.com/SharathHebbar/Transformers
