Slight Changes in a Prompt Make a Great Difference in an LLM's Output

Ramazanaltinisik
3 min read · Oct 14, 2023


Hello everyone! Today I will explain why we should describe the output we want in detail in our prompts, which is important when working with LLMs.

As we know, a large language model gives us output based on exactly what we ask in our prompt.

If our prompt doesn't cover our criteria, we won't get output that satisfies our needs, because we haven't given the model directions to follow.

In other words, the more general our input is, the more general, and the less accurate, our output will be.

So our prompt must be precise and give the LLM a framework to follow.

To be more clear, let's check out this prompt example from PaLM.

I was given a task by PaLM: create a prompt that proves the hypothesis is true.

At first glance, it seems a bit complicated. But let me explain these lines:

In the first and second lines, the LLM (PaLM) provides a premise and a hypothesis.

In the third line, it gives me the task of writing a prompt that proves the hypothesis true.

Then I wrote my prompt using the "few-shot example" and "chain-of-thought" prompting techniques to prove the hypothesis true.

In the last line, I asked the LLM for feedback on whether my prompt was effective for the task ("prove the hypothesis is true").

Everything seemed normal at first, but here is the issue:

"The feedback was not accurate and didn’t give me a detailed approach to my prompt”.

Then I decided to ask a different LLM (ChatGPT) to give me feedback.

Here is what I got from ChatGPT:

The output I got from ChatGPT was more accurate and satisfied my needs.

The reason I got a more detailed and accurate answer is not the quality of ChatGPT versus PaLM.

It is because my prompt to ChatGPT was a bit more detailed.

Here is the prompt I gave to ChatGPT (a little more detailed than the one I gave to PaLM):

As you see, I didn't just ask it to give me feedback; I also asked for:

— Reasons behind the score

— Key performance indicators for the evaluation (clarity, relevance to the task)

As a result, the output I got from ChatGPT was far better than the one I got from PaLM.

That is because I made slight changes to my prompt to give ChatGPT a framework (reasons, KPIs) to follow.

I hope this article gives you a sense of how important it is to write slightly more detailed prompts when working with large language models.
