https://www.google.com/url?sa=i&url=https%3A%2F%2Fmarmof.com%2Fblog%2Fthe-art-of-prompt-engineering-how-to-craft-effective-writing-prompts-for-ai-tools&psig=AOvVaw0Son4B68VcmwWhZP3xSef9&ust=1695198222699000&source=images&cd=vfe&opi=89978449&ved=0CBAQjRxqFwoTCIiv_9CftoEDFQAAAAAdAAAAABAE

Prompt Engineering — Not as fancy as it sounds :)

Ayasya Mamidala
7 min readSep 19, 2023

Recently, this is the buzz word that’s sounding on lately. Wondering what it is all about, took a course from DeepLearning.AI & other sources to throw some light on it.

Ok, we all know that GPT(Generative Pre-trained Transformer) is ruling now to give answers on whatever we ask.

Had a couple of doubts on when heard about it first, like most of us do.

  1. How it’s giving almost accurate responses to everything that’s asked?- Simply put, how’s it working?
  2. How to ask an AI model properly to get expected answer?

Let’s focus on 2nd question in this blog- That is nothing but “Prompt Engineering”

Once I was experimenting with ChatGPT, So asked “Why is user experience important for developing effective chatbot systems?” — It gave me a generalized response. But, expected a more sensible & accurate response.

Then got to know that, it does matter of how we exactly frame our question.

Remembered a quote:

“The most serious mistakes are not being made as a result of wrong answers. The true dangerous thing is asking the wrong question.” — Peter Drucker

We know that using Google search, Keywords play an important role to fetch required results efficiently. For example there’s a difference between these searches — How to setup & install windows 11 in laptop/pc & — Windows 11 installation.

Whereas, with Prompt engineering one must be precise & so elaborate along with right usage of keywords.

Let’s dive in more about it!

Before understanding what is prompt engineering, we have to understand some concepts. Don’t worry let’s just scratch the surface!

Natural Language Processing

Natural language processing (NLP) refers to the branch of computer science — and more specifically, the branch of AI — concerned with giving computers the ability to understand text and spoken words in much the same way human beings can.

https://nexocode.com/blog/posts/definitive-guide-to-nlp/

Large Language Models(LLM)

A large language model is a trained deep-learning model that understands and generates text in a human-like fashion. LLMs are part of NLP models.

https://vitalflux.com/wp-content/uploads/2023/04/Large-language-models-LLM-building-blocks.png

Types of LLMs:

  1. Base LLMs: Predicts next word based on training data
  2. Instruction Tuned LLM: Tries to follow instructions.
  • It’s a fine-tuned version of Base LLM, trained with more text data
  • fine-tuned on instructions & good attempts at following those instructions.
  • Further refined using RLHF: Reinforcement Learning using Human Feedback
  • Trained to be Helpful, Harmless & Honest — Does not give bad/toxic outputs
https://learn.deeplearning.ai/chatgpt-prompt-eng/lesson/1/introduction

GPT: Generative Pre-trained Transformers is a deep learning-based Large Language Model (LLM), utilizing a decoder-only architecture built on transformers.

  • Its purpose is to process text data and generate text output that resembles human language.

NOTE:

  • ChatGPT is used to demonstrate prompt engineering. These lines of code are used to get access for ChatGPT responses locally.
import openai
import os
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
openai.api_key = os.getenv('OPENAI_API_KEY')
  • Create an Open AI account and replace ‘OPENAI_API_KEY’ with your key.
  • Below code is the function that takes user prompt as input & uses gpt-3.5 model to give the responses using ChatGPT, creating a chat like response.
def get_completion(prompt, model="gpt-3.5-turbo",temperature=0):
messages = [{"role": "user", "content": prompt}]
response = openai.ChatCompletion.create(
model=model,
messages=messages,
temperature=temperature, # this is the degree of randomness of the model's output
)
return response.choices[0].message["content"]
  1. At first glance this blog might look lengthy, but take your time and go through all the code snippet screenshots for better understanding. As, good things take time!
  2. No LLM/GPT models are used to write this blog. Credits to human brain :)

Principles of Prompting

Write Clear & Specific Instructions

  • A short prompt cannot be a clear prompt…Make it elaborative. Tactic 1: Use delimiters in prompt like:

Ex:

  • Using de-limiters will avoid prompt injection (User is allowed to add conflicting input in prompt & changes the model output rather than expected)
  • De-limiters will specify the model that any prompt given under it, is a separate section. Tactic 2: Ask for Structured Output
  • Makes output parsing easier
  • HTML or JSON format outputs might be clear to understand.

Tactic 3: Check whether conditions are satisfied, check assumptions required to do task.

  • Ex: Prompt to give response in form of instructions for a given text.

Tactic 4: Few-Shot Mapping — Give successful examples of completing tasks, then ask model to perform task.

  • Give an example prompt & explain the model how to behave/output format desired.

Give model time to think

  1. Tactic 1: Specify the steps to complete a task

Tactic 2: Instruct the model to workout it’s own solution, before rushing into a conclusion.

Model Limitations

Hallucination: Makes Statements that sound plausible, but not true.

Ex: If prompted about anything even superficial, it gives output as a real thing.

Reducing Hallucinations: Prompt like — First find relevant information, then answer the question.

Iterative Prompt Development

  • Keep on iterating prompts until it gives a best & desired result.
  • LLM models works on tokenizers, hence might not get best results at a very first prompt. Give prompts in different ways to get desired & sophisticated output
https://learn.deeplearning.ai/chatgpt-prompt-eng/lesson/1/introduction

Text Summarizing

  • Summarizes the given text in any given number of words.
  • Use Extract key word instead of Summarize to get exact gist of given text

Inferring

  • Infer GPT models to directly perform sentiment analysis
  • In Traditional ml models, for each task of prediction in sentiment analysis, a different model must be trained & tested.
  • Whereas, by inferencing GPT one can perform different kinds of sentiment analysis without training different models.
  • For ex- for a given user review of an online product, sentiment analysis can be performed on that text to understand 1> It’s tone, 2> It’s positive or negative, 3> List of emotions in given text & 4> Finding a specific emotion in given text etc.
  • Using traditional ML, we require different models for each operation whereas with GPT we can perform everything at a place.
https://learn.deeplearning.ai/chatgpt-prompt-eng/lesson/1/introduction
  • Topics present in given text or Specific topics can be found in a given text.

Transforming

  • Large Language Models for text transformation tasks such as
  • language translation,

Try to guess the format of output ->

  • Spelling and grammar checking,
  • Tone adjustment,
  • Format conversion.
https://learn.deeplearning.ai/chatgpt-prompt-eng/lesson/1/introduction
  • Real-time Example — We will generate customer service emails that are tailored to each customer’s review.

Temperature: Degree of randomness of the response given by LLM models. It will allow us to change the variety of responses given by the model.

https://learn.deeplearning.ai/chatgpt-prompt-eng/lesson/1/introduction
  • Modify temperature accordingly with the randomness required for the use-case.

Hope, this gives a brief clarity of what is Prompt Engineering. There is a lot of research going on & much more to explore and learn.

Imagination & Coding skills are the only limitations to generate enormous applications using Large Language Models & Prompt Engineering — Use responsibly!

--

--