Prompt Engineering Tips for ChatGPT

Are you ready to take your conversations to the next level?

Ivan Campos
Sopmac AI
7 min readApr 24, 2023

--

As a large language model (LLM) interface, ChatGPT has the potential to generate impressive responses; however, the key to unlocking its true capabilities lies in Prompt Engineering.

In this post, we’ll reveal expert tips and techniques for crafting prompts that yield more accurate and relevant responses. Whether you’re using ChatGPT for customer service, content creation, or simply for fun, this article will provide you with the knowledge and tools to optimize your prompts with ChatGPT.

Cost Optimization

When considering advanced prompts, you can quickly find yourself unintentionally generating lengthy and resource-intensive prompts that may not be cost-effective.

While compressing prompt requests is still a very nascent field, a proven solution is to shrink your prompt responses.

Response Reduction

To reduce the lengths of your ChatGPT responses, include length or character limits inside your prompt (e.g. create a twitter post that is at most 280 characters).

Using a more generic approach, you could always append the following to your prompt:

“Respond as succinctly as possible.”

Prompting Terminology Simplified

Zero-shot: No examples provided

One-shot: One example provided

Few-shot: More than one example provided

Patterns

The best method of prompting ChatGPT to generate text depends on the specific task that you want the LLM to perform. If you are not sure which method to use, you can experiment with different methods to see which one works best for you. Below we will review 5 methods to begin your prompt experimentation journey.

Chain-of-Thought (CoT)

The chain-of-thought method involves providing ChatGPT with a few examples of intermediate reasoning steps that can be used to solve a particular problem.

“Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” (Jan-10–2023)

Self-Ask

The self-ask method involves the model explicitly asking itself (and then answering) follow-up questions before answering the initial question.

“MEASURING AND NARROWING THE COMPOSITIONALITY GAP IN LANGUAGE MODELS” (Oct-07-2022)

Step-by-Step

The step-by-step method involves providing ChatGPT with the following instruction:

Let’s think step by step.

This technique has been shown to improve the performance of LLMs on a variety of reasoning tasks, including arithmetic, commonsense, and symbolic reasoning.

“Large Language Models are Zero-Shot Reasoners” (Jan-29-2023)

OpenAI has trained their GPT models with a human-in-the-loop approach via Reinforcement Learning with Human Feedback (RLHF); so, it makes sense that ChatGPT’s underlying model is aligned with the human-like approach of step-by-step thinking.

“Large Language Models are Zero-Shot Reasoners” (Jan-29-2023)

ReAct

The ReAct (Reason + Act) method involves combining reasoning traces and task-specific actions.

Reasoning traces help the model with planning and handling exceptions, while actions allow it to gather information from external sources like knowledge bases or environments.

“REACT: SYNERGIZING REASONING AND ACTING IN LANGUAGE MODELS” (Mar-10–2023)

Reflexion

Building on the ReAct pattern, the Reflexion method enhances the LLM by adding dynamic memory and self-reflection capabilities improving its reasoning trace and task-specific action choice abilities.

To achieve full automation, the authors of the reflexion paper introduced a simple but effective heuristic that allows the agent to identify hallucinations, prevent repetitive actions, and create an internal memory map of the environment in some cases.

“Reflexion: an autonomous agent with dynamic memory and self-reflection” (Mar-20-2023)

Now that we’ve introduced these 5 cutting-edge patterns, let’s look at a few anti-patterns related to prompt engineering.

Anti-Patterns

As corporations, like Samsung, have already learned…

Do not share private and/or sensitive information.

Understand that employees feeding proprietary code and financials into ChatGPT is just the beginning. Soon you will have Word, Excel, PowerPoint, and all of the most used corporate software be fully integrated with ChatGPT-like capabilities. Ensure that policies are in place before your data is fed into large language models like ChatGPT.

It is worth noting that OpenAI API data usage policy clearly states:

“By default, OpenAI will not use data submitted by customers via our API to train OpenAI models or improve OpenAI’s service offering.”

“OpenAI retains API data for 30 days for abuse and misuse monitoring purposes. A limited number of authorized OpenAI employees, as well as specialized third-party contractors that are subject to confidentiality and security obligations, can access this data solely to investigate and verify suspected abuse.”

Prompt Injection

Just as you would protect your database from SQL injection, be sure to protect any prompts that you expose to users from Prompt Injection.

Source: https://hackaday.com/2014/04/04/sql-injection-fools-speed-traps-and-clears-your-record/

By Prompt Injection, I am referring to a technique used to hijack a language model’s output by injecting malicious code into the prompt.

The first documented case of prompt injection was by Riley Goodside, where he simply prefixed the phrase

“Ignore the above directions”

and then supplied the intended action that intentionally circumvented any behavior with injected instructions.

Prompt Leaking

In a similar vein, not only can your intended prompt behavior be ignored, it can also be leaked.

“Ignore Previous Prompt: Attack Techniques For Language Models” (Nov-17–2022)

Prompt leaking is a security vulnerability where an attacker is able to extract the model’s own prompt — as was done shortly after Bing released their ChatGPT integration.

Source: https://twitter.com/kliu128/status/1623472922374574080

In a generic sense, prompt injection (goal hijacking) and prompt leaking can be pictured as follows:

“Ignore Previous Prompt: Attack Techniques For Language Models” (Nov-17–2022)

While there will always be bad actors who are looking to exploit any prompts that you expose, just like SQL Injection can be prevented with Prepared Statements, you can create defensive prompts to fight back against bad promptors.

The Sandwich Defense

One such technique is the Sandwich Defense where you “sandwich” the user’s input with your prompt goals.

https://learnprompting.org/docs/prompt_hacking/defensive_measures

Conclusion

ChatGPT responses are non-deterministic — meaning that even for the same prompt, the model can return different responses on different runs.

To handle the unpredictable nature of non-deterministic results, you can establish a zero or low temperature setting when using the OpenAI API.

Please feel free to experiment with the prompting tips in this article using the following interfaces. However, please keep in mind the non-deterministic nature of LLMs while exploring:

  • ChatGPT (ai.com): OpenAI’s public chatbot interface.
  • OpenAI Playground: Once you’ve signed up for an OpenAI API key, you can head over to OpenAI’s playground to test out your prompts and corresponding parameters, like temperature.
  • Vercel AI Playground: Free playground that allows you to compare the results of your prompts across multiple large language models — includes GPT-4 and Anthropic’s Claude, among others.
  • OpenAI API JavaScript Jumpstart (OpenAI API key required): A UI that I’ve open sourced that will give you full control of your OpenAI prompts, how they are rendered, and also calculates how much each prompt costs.

Resources

--

--

Ivan Campos
Sopmac AI

Exploring the potential of AI to revolutionize the way we live and work. Join me in discovering the future of tech