Six Essential Tips for Mastering Prompt Engineering in LLMs

Sahin Ahmed, Data Scientist
14 min read · Jul 11, 2024


Hey there, fellow AI enthusiasts and curious minds! Today, we’re diving into the fascinating world of prompt engineering for Large Language Models (LLMs). If you’ve ever chatted with an AI and thought, “Hmm, I wonder how I could get better answers,” then you’re in for a treat!

Brief explanation of prompt engineering and its importance

So, what exactly is prompt engineering? Well, think of it as the art and science of talking to AI in a way that gets you the best possible results. It’s like learning how to ask the right questions to your super-smart, but sometimes quirky, AI friend. And trust me, it’s more important than you might think!

You see, these LLMs are incredibly powerful, but they’re not mind readers (at least, not yet!). The quality of the output you get depends heavily on how you frame your input. That’s where prompt engineering comes in. It’s all about crafting your prompts — the questions or instructions you give to the AI — in a way that guides the model to give you accurate, relevant, and useful responses.

Get this right, and you’ll unlock the true potential of these AI marvels. Get it wrong, and well… let’s just say you might end up with some hilariously off-topic answers or, worse, unhelpful or inaccurate information.

Now, I know what you’re thinking: “Alright, I’m sold! But how do I actually do this prompt engineering thing?” Don’t worry, I’ve got you covered. In this post, we’re going to walk through six essential best practices that’ll turn you into a prompt engineering pro in no time.

The impact of well-crafted prompts on output quality

The quality of your prompt can make or break the AI’s response. It’s like the difference between asking a human expert a vague question versus a well-thought-out, specific one. You’re going to get wildly different results!

A well-crafted prompt does several things:

  1. It focuses the AI’s attention on the right concepts. Remember those attention mechanisms we talked about? A good prompt makes sure they’re looking at the right things.
  2. It provides context. LLMs can generate more accurate and relevant responses when they have a clear context to work with.
  3. It sets the tone and style. Want a formal report or a casual explanation? Your prompt can guide this.
  4. It can steer the AI away from potential mistakes or biases. By being specific and including key instructions, you can help prevent the AI from going off track.

Let me give you a quick example. Say you want information about climate change. Compare these two prompts:

Prompt 1: “Tell me about climate change.”

Prompt 2: “Provide a concise summary of the main causes and effects of climate change, focusing on scientific consensus from the past 5 years. Include 3–4 key statistics.”

The first prompt might give you a general, possibly overwhelming amount of information. The second is likely to produce a more focused, up-to-date, and data-driven response.
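To make this concrete, here’s a minimal sketch of how you might send both prompts through an OpenAI-style chat completions client and compare the answers side by side. The client setup and model name are assumptions — substitute whatever SDK and model you actually use.

```python
# A minimal sketch: comparing a vague prompt with a specific one.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name below is only an example.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Tell me about climate change."
specific_prompt = (
    "Provide a concise summary of the main causes and effects of climate change, "
    "focusing on scientific consensus from the past 5 years. "
    "Include 3-4 key statistics."
)

for label, prompt in [("vague", vague_prompt), ("specific", specific_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content[:500])  # preview the first 500 characters
```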

Best Practice 1: Be Clear and Specific

You know how sometimes you ask a friend a question, and they give you a long-winded answer about something completely different? Well, LLMs can be a bit like that friend if we’re not careful. So, let’s learn how to ask in a way that gets us exactly what we need!

A. Clearly stating objectives or questions

Think of this as setting the GPS for your AI journey. You wouldn’t just tell your GPS “Take me somewhere nice,” right? Same goes for your LLM prompts.

Instead of: “Tell me about dogs.” Try: “Explain the key differences between large and small dog breeds in terms of lifespan, exercise needs, and common health issues.”

See the difference? The second one gives the AI a clear destination. It knows exactly what you’re looking for and can tailor its response accordingly.

B. Providing necessary context

Context is king! It’s like giving your AI a pair of glasses so it can see the full picture.

Instead of: “What should I plant?” Try: “I live in a small apartment in Seattle with a north-facing balcony that gets about 3 hours of direct sunlight daily. What are some suitable plants I could grow there?”

By providing context, you’re helping the AI understand your specific situation, leading to more relevant and useful answers.

C. Specifying desired response format

This is like choosing the container for your AI-brewed information smoothie. Want it in a cup? A bowl? A fancy glass?

Instead of: “Tell me about the solar system.” Try: “Create a bulleted list of the planets in our solar system, ordered by size from largest to smallest. For each planet, include its name and one unique characteristic.”

By specifying the format, you’re more likely to get a response that’s easy to read and use.
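If you build prompts in code, a tiny helper that forces you to fill in an objective, some context, and a desired format can keep you honest about all three. This is just an illustrative sketch — the field names are my own, not a standard.

```python
# Illustrative sketch: a small template that bakes in the three ingredients
# from this section -- a clear objective, necessary context, and a response format.
def build_prompt(objective: str, context: str, response_format: str) -> str:
    return (
        f"{objective}\n\n"
        f"Context: {context}\n\n"
        f"Format your answer as: {response_format}"
    )

prompt = build_prompt(
    objective="Recommend plants I could grow at home.",
    context=(
        "I live in a small apartment in Seattle with a north-facing balcony "
        "that gets about 3 hours of direct sunlight daily."
    ),
    response_format="a bulleted list of 5 plants, each with one sentence on care needs",
)
print(prompt)
```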

D. Examples of clear vs. ambiguous prompts

Let’s put it all together with some before-and-after examples:

Example 1: Ambiguous: “What’s good for headaches?”

Clear: “List 5 evidence-based, non-pharmaceutical remedies for tension headaches in adults. For each remedy, briefly explain how it works and any precautions to consider.”

Example 2: Ambiguous: “Tell me about climate change.”

Clear: “Summarize the three most significant impacts of climate change on global agriculture as reported in peer-reviewed studies from the past 5 years. Include specific examples and, if possible, quantitative data.”

Remember, being clear and specific isn’t about writing a novel in your prompt. It’s about giving the AI the right cues so it can give you the best possible answer. Think of it as giving good directions to a very eager but literal-minded helper.

Best Practice 2: Use Examples

You know how sometimes it’s easier to show someone what you want rather than just tell them? That’s exactly what we’re doing here with our AI friends. It’s like giving them a little sneak peek of what we’re looking for.

A. Importance of providing sample outputs

Imagine you’re trying to teach a friend how to make the perfect sandwich. You could list all the ingredients and steps, or you could show them a picture of the finished sandwich. Better yet, you could make one right in front of them. That’s what providing sample outputs does for an LLM.

Why is this so powerful? Well, it:

  1. Sets clear expectations: The AI gets a concrete idea of what you’re after.
  2. Demonstrates style and format: It’s like showing the AI your “vibe”.
  3. Helps with complex or nuanced tasks: Sometimes, it’s hard to explain what you want in words alone.
  4. Improves consistency: The AI can pattern-match more effectively.

Here’s a quick example:

Instead of: “Write a haiku about autumn.”

Try this:

“Here’s an example of a summer haiku for reference:

Sizzling sidewalks bake
Cicadas sing in oak trees
Ice cream truck chimes ring

Now, create a similar haiku but about autumn.”

See how that gives the AI a clear template to work with?

B. Implementing few-shot learning

Now, let’s kick it up a notch with few-shot learning. This is like giving your AI multiple practice runs before the main event.

Few-shot learning is when you provide not just one, but several examples before asking the AI to perform a task. It’s super effective for more complex or nuanced requests.

Here’s how you might use it:

“I want you to generate product descriptions for eco-friendly kitchen gadgets. Here are three examples:

  1. Bamboo Utensil Set: Elevate your cooking game while loving the planet. Our durable bamboo utensil set brings sustainable style to your kitchen. Heat-resistant, non-scratch, and naturally antimicrobial.
  2. Beeswax Food Wraps: Say goodbye to plastic wrap! These colorful beeswax wraps keep your food fresh naturally. Reusable, biodegradable, and infused with jojoba oil for extra food-safe protection.
  3. Coconut Fiber Dish Scrubber: Tough on grime, gentle on the Earth. This 100% biodegradable scrubber tackles dirty dishes with ease. Naturally antimicrobial fibers ensure a hygienic clean every time.

Now, using a similar style and format, create a product description for a set of reusable silicone food storage bags.”

This approach is like giving the AI a mini-training session right in your prompt. It helps the model understand not just the format, but also the tone, level of detail, and specific elements you want to include.
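One way to wire this up in code is to keep your examples in a list and join them into the prompt, so you can add, swap, or reorder examples without rewriting the whole thing. A rough sketch, reusing the product examples from above:

```python
# Rough sketch: assembling a few-shot prompt from a list of examples.
examples = [
    ("Bamboo Utensil Set",
     "Elevate your cooking game while loving the planet. Our durable bamboo "
     "utensil set brings sustainable style to your kitchen. Heat-resistant, "
     "non-scratch, and naturally antimicrobial."),
    ("Beeswax Food Wraps",
     "Say goodbye to plastic wrap! These colorful beeswax wraps keep your food "
     "fresh naturally. Reusable, biodegradable, and infused with jojoba oil for "
     "extra food-safe protection."),
    ("Coconut Fiber Dish Scrubber",
     "Tough on grime, gentle on the Earth. This 100% biodegradable scrubber "
     "tackles dirty dishes with ease. Naturally antimicrobial fibers ensure a "
     "hygienic clean every time."),
]

task = "a set of reusable silicone food storage bags"

prompt = (
    "I want you to generate product descriptions for eco-friendly kitchen "
    "gadgets. Here are some examples:\n\n"
)
for i, (name, description) in enumerate(examples, start=1):
    prompt += f"{i}. {name}: {description}\n\n"
prompt += f"Now, using a similar style and format, create a product description for {task}."

print(prompt)
```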

Few-shot learning is particularly useful for:

  • Writing in specific styles or formats
  • Solving problems with a particular methodology
  • Generating content with consistent structure
  • Handling tasks that require a nuanced understanding

Remember, the key is to make your examples diverse enough to show the range of what you’re looking for, but similar enough that the AI can pick up on the common patterns.

Pro tip: If you’re working on a big project that requires multiple, similar outputs, spend some time crafting really good examples at the start. It’s a bit more work upfront, but it can save you tons of time in the long run by improving the consistency and quality of the AI’s outputs.

Best Practice 3: Break Down Complex Tasks

You know those days when your to-do list looks like a novel? That’s how AI can feel when faced with a complex query. So, let’s learn how to make things more manageable for our digital friends!

A. Dividing queries into manageable steps

Think of this as creating a recipe for your AI. Instead of saying “Make a gourmet meal,” you’re listing out each step of the cooking process.

Why is this so effective?

  • It prevents overwhelm: The AI can focus on one thing at a time.
  • It improves accuracy: Each step can be handled more precisely.
  • It allows for better error checking: You can spot issues in individual steps more easily.
  • It makes the process more transparent: You can see how the AI is approaching the problem.

Here’s a quick example:

Instead of: “Analyze the impact of social media on teenage mental health and suggest solutions.”

Try this:

“Let’s analyze the impact of social media on teenage mental health and suggest solutions. Please approach this in the following steps:

  • First, list the top 3 most popular social media platforms among teenagers.
  • For each platform, identify one positive and one negative impact on teenage mental health.
  • Summarize the overall trends you notice from this analysis.
  • Based on these trends, suggest 3 practical solutions that could mitigate the negative impacts.
  • Finally, provide a brief conclusion that ties everything together.”

See how we’ve turned a complex task into a series of more manageable steps? It’s like creating a roadmap for the AI to follow.
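In code, you can keep the steps in a plain list and number them automatically, which makes it easy to add, remove, or reorder steps later. A small sketch:

```python
# Small sketch: turning a complex task into an explicit, numbered set of steps.
task = "analyze the impact of social media on teenage mental health and suggest solutions"
steps = [
    "List the top 3 most popular social media platforms among teenagers.",
    "For each platform, identify one positive and one negative impact on teenage mental health.",
    "Summarize the overall trends you notice from this analysis.",
    "Based on these trends, suggest 3 practical solutions that could mitigate the negative impacts.",
    "Provide a brief conclusion that ties everything together.",
]

prompt = f"Let's {task}. Please approach this in the following steps:\n"
prompt += "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
print(prompt)
```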

B. Using step-by-step approaches for multi-part problems

Now, let’s look at how we can apply this to more complex, multi-part problems. This is where things get really interesting!

Imagine you’re working on a business project and need the AI’s help. Here’s how you might break it down:

“I need help developing a marketing strategy for a new eco-friendly water bottle. Let’s approach this step-by-step:

  • Target Audience: Identify 3 potential target audience segments for this product. For each segment, provide a brief description and explain why they would be interested in an eco-friendly water bottle.
  • Competitive Analysis: List 3 existing eco-friendly water bottle brands. For each, briefly describe their unique selling proposition and one thing they do well in their marketing.
  • Unique Selling Proposition (USP): Based on the target audience and competitive analysis, suggest a USP for our new water bottle. Explain the reasoning behind this USP.
  • Marketing Channels: Recommend 3 marketing channels that would be effective for reaching our target audience. For each channel, explain why it’s suitable and suggest one specific marketing activity we could do on that channel.
  • Budget Allocation: Assuming we have a marketing budget of $50,000, suggest how we might allocate this across the recommended channels. Provide a brief rationale for this allocation.
  • Success Metrics: Propose 3 key performance indicators (KPIs) we should track to measure the success of this marketing strategy. Explain why each KPI is important.
  • Timeline: Outline a basic 3-month timeline for implementing this marketing strategy, highlighting key milestones or activities.

After completing these steps, please provide a brief summary (2–3 sentences) of the overall marketing strategy.”

This approach has several benefits:

  • It ensures all aspects of the problem are addressed.
  • It allows you to provide specific instructions for each part.
  • It makes it easier to review and refine individual elements of the response.
  • It guides the AI through a logical thought process.
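You also don’t have to send all seven steps in one giant prompt. Another option, sketched below, is to run each step as its own request and pass the earlier answers back in as context. The ask() helper and model name are assumptions, mirroring the OpenAI-style client from the earlier sketch.

```python
# Sketch: running a multi-part problem as a chain of smaller requests,
# feeding each step's answer back in as context for the next.
# Assumes the OpenAI Python SDK; the model name is only an example.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

steps = [
    "Identify 3 potential target audience segments for a new eco-friendly water bottle.",
    "List 3 existing eco-friendly water bottle brands and their unique selling propositions.",
    "Based on the audience segments and competitors above, suggest a USP for our bottle.",
    "Recommend 3 marketing channels for reaching the target audience, with one activity each.",
]

context = ""
for i, step in enumerate(steps, start=1):
    prompt = f"{context}\nStep {i}: {step}" if context else f"Step {i}: {step}"
    answer = ask(prompt)
    context += f"\nStep {i} answer:\n{answer}\n"  # carry earlier answers forward
    print(f"--- Step {i} ---\n{answer}\n")
```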

Pro Tips:

  • Number your steps: It makes it easier for both you and the AI to reference specific parts.
  • Be consistent: Try to make each step a similar level of complexity.
  • Use action words: Start each step with a clear verb (identify, list, suggest, etc.).
  • Ask for reasoning: Where appropriate, ask the AI to explain its choices. This can provide valuable insights.

Remember, breaking down complex tasks isn’t just about making things easier for the AI. It also helps you clarify your own thinking and ensures you’re asking for all the information you need.

Best Practice 4: Leverage Role-Playing

Remember when you were a kid and you’d pretend to be a doctor, chef, or superhero? Well, we’re bringing that playful spirit to our AI interactions, and trust me, the results can be amazing!

A. Assigning specific roles or personas to the AI

Think of this as giving your AI a costume and a character to play. By assigning a specific role, you’re providing a framework for how the AI should approach a task or question.

Why is this so powerful?

  1. It provides context: The AI can draw on specific knowledge associated with that role.
  2. It sets the tone: Different roles come with different communication styles.
  3. It encourages creativity: The AI can “think” from a new perspective.
  4. It can make complex topics more accessible: Technical info can be explained in role-appropriate ways.

Here are some fun examples:

Instead of: “Explain quantum computing.” Try: “You’re a quirky scientist on a children’s TV show. Explain quantum computing to your young audience using everyday objects as analogies.”

Instead of: “Give me tips for public speaking.” Try: “You’re a charismatic TED Talk coach. Give me your top 5 tips for delivering a memorable presentation.”
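When you’re calling a model through an API rather than a chat window, the natural home for the persona is the system message. A minimal sketch, again assuming an OpenAI-style client and an example model name:

```python
# Minimal sketch: assigning a role (persona) via the system message.
# Assumes the OpenAI Python SDK; the model name is only an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a quirky scientist on a children's TV show. "
                "Explain ideas using everyday objects as analogies."
            ),
        },
        {"role": "user", "content": "Explain quantum computing to your young audience."},
    ],
)
print(response.choices[0].message.content)
```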

B. Framing tasks in terms of the assigned role

Now that we’ve given our AI a role, let’s see how we can frame our tasks to really bring that character to life!

Example 1: The Historical Figure

“You are Leonardo da Vinci, the Renaissance polymath. I’m a 21st-century inventor seeking your advice. How would you approach designing a flying car, given your experience with both art and engineering? Please include:

  1. Your initial thoughts on the concept
  2. Three key principles you’d apply from your own inventions
  3. A sketch description of your initial design idea
  4. Any warnings or advice you’d give based on your experience”

Example 2: The Expert Professional

“You’re a seasoned cybersecurity expert with 20 years of experience in the field. A small business owner has approached you for advice on protecting their company from cyber threats. Please provide:

  1. An explanation of the top 3 cybersecurity risks for small businesses in 2024
  2. Practical, cost-effective solutions for each risk
  3. A basic weekly checklist for maintaining good cybersecurity hygiene
  4. Your professional opinion on whether they should hire a full-time IT security person or use a managed service provider”

Pro Tips for Role-Playing Prompts:

  1. Be specific about the role: The more details you provide about the character or expert, the more tailored the response will be.
  2. Stay in character: Frame your follow-up questions or tasks in a way that fits the scenario you’ve created.
  3. Mix it up: Try assigning unexpected roles for fresh perspectives. For example, “You’re a marine biologist. How would you approach improving a city’s public transportation system?”
  4. Use it for comparison: Assign different roles to analyze the same problem from various angles.

The beauty of role-playing in prompts is that it not only makes the interaction more engaging but can also lead to unique insights and creative solutions. It pushes the AI to draw connections between different domains of knowledge in interesting ways.

Best Practice 5: Iterate and Refine

Think of this as the “practice makes perfect” of the AI world. Just like a chef tweaking a recipe or a musician fine-tuning a composition, we’re going to perfect our prompts through trial, error, and refinement.

A. Testing different prompt formulations

This is all about experimenting with different ways to ask the same question. It’s like trying to find the perfect way to explain something to a friend — sometimes you need to rephrase it a few times before it clicks.

Why is this important?

  1. Different phrasings can yield different results
  2. It helps you understand how the AI interprets various instructions
  3. You might stumble upon a formulation that works better than you expected

Let’s look at an example:

Initial prompt: “Tell me about climate change.”

Iterations:

  1. “Summarize the key impacts of climate change on global ecosystems.”
  2. “Explain climate change as if you’re talking to a 10-year-old.”
  3. “List the top 5 contributors to climate change and their percentage of impact.”
  4. “Compare and contrast the effects of climate change in polar regions vs. tropical areas.”

Each of these prompts will likely give you different aspects of climate change information. By testing these variations, you can find which one best suits your needs.
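If you’re testing variants systematically, it can help to run them all in one loop and compare the outputs side by side. A rough sketch, again assuming an OpenAI-style client and an example model name:

```python
# Rough sketch: trying several formulations of the same question and
# collecting the outputs for side-by-side comparison.
# Assumes the OpenAI Python SDK; the model name is only an example.
from openai import OpenAI

client = OpenAI()

variants = [
    "Summarize the key impacts of climate change on global ecosystems.",
    "Explain climate change as if you're talking to a 10-year-old.",
    "List the top 5 contributors to climate change and their percentage of impact.",
    "Compare and contrast the effects of climate change in polar regions vs. tropical areas.",
]

results = {}
for prompt in variants:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    results[prompt] = response.choices[0].message.content

for prompt, answer in results.items():
    print(f"PROMPT: {prompt}\n{answer[:300]}\n{'-' * 40}")
```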

B. Analyzing outputs and adjusting prompts

This is where we put on our detective hats and really examine what the AI is giving us. We’re looking for clues about how to make our prompts even better.

Here’s how you might approach this:

  1. Identify what’s missing: Is there key information the AI didn’t include?
  2. Look for misunderstandings: Did the AI interpret part of your prompt incorrectly?
  3. Check for relevance: Is all the information provided actually useful for your needs?
  4. Assess the format: Is the output structured in a way that’s easy to use?

Based on this analysis, you can then adjust your prompt. For example:

Original prompt: “Explain the process of photosynthesis.”

Output analysis: The explanation was technically correct but too advanced for a general audience.

Adjusted prompt: “Explain the process of photosynthesis in simple terms, as if you’re teaching a middle school science class. Use everyday analogies to illustrate key concepts.”

C. The iterative process of prompt optimization

This is where we bring it all together into a cyclical process of continuous improvement. Think of it as a feedback loop:

  1. Draft initial prompt
  2. Get AI response
  3. Analyze the output
  4. Identify areas for improvement
  5. Refine the prompt
  6. Repeat steps 2–5 until satisfied (a rough code sketch of this loop follows below)
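Here’s one way to turn that loop into a tiny workflow: log every prompt and its output, review the result, and either stop or type a refined prompt. The ask() helper and model name are assumptions, mirroring the earlier sketches.

```python
# Sketch: a simple prompt-refinement loop that logs every attempt.
# Assumes the OpenAI Python SDK; the model name is only an example.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

log = []  # (prompt, output) pairs, so you can see what worked and what didn't
prompt = input("Initial prompt: ")
while True:
    output = ask(prompt)
    log.append((prompt, output))
    print(f"\n{output}\n")
    prompt = input("Refined prompt (or press Enter to stop): ").strip()
    if not prompt:
        break

print(f"Done after {len(log)} iteration(s).")
```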

Let’s see this in action with a real-world example:

Scenario: You’re trying to get the AI to help you brainstorm names for a new eco-friendly cleaning product line.

Iteration 1:
Prompt: “Suggest names for an eco-friendly cleaning product line.”
Result: Names were too generic (e.g., “Green Clean,” “Eco Shine”).

Iteration 2:
Prompt: “Suggest creative, nature-inspired names for a high-end eco-friendly cleaning product line. The names should evoke feelings of freshness and purity.”
Result: Better, but still missing the mark (e.g., “Mountain Breeze Cleaners,” “Pure Meadow Sprays”).

Iteration 3:
Prompt: “You’re a branding expert specializing in eco-friendly products. Create 5 unique, memorable names for a luxury eco-friendly cleaning product line. Each name should:

  1. Be no more than two words
  2. Include a subtle nod to nature without using obvious words like ‘green’ or ‘eco’
  3. Convey a sense of effectiveness and sophistication
  4. Be easy to pronounce and remember

For each name, provide a brief explanation of its appeal and relevance to the brand.”
Result: Much improved, with unique and fitting suggestions.

Pro Tips for Iterating and Refining:

  1. Keep a log of your prompts and their results. This helps you track what works and what doesn’t.
  2. Don’t be afraid to make big changes. Sometimes a complete rewrite is better than small tweaks.
  3. Test your refined prompts with different scenarios to ensure they’re versatile.
  4. Remember that the ‘perfect’ prompt might change depending on the specific task or context.

Iterating and refining is where the magic happens in prompt engineering. It’s a skill that improves with practice, so don’t get discouraged if your first attempts aren’t perfect. Each iteration is a step towards mastery!

Best Practice 6: Mind the Context Length

A. Balancing detail with conciseness

  • Aim for “Goldilocks” prompts: not too long, not too short
  • Include essential details only
  • Use clear, precise language to convey information efficiently

B. Understanding and working within token limits

  • Different AI models have different token limits
  • Tokens are fragments of words rather than whole words — a common rule of thumb is about 4 characters, or roughly ¾ of an English word, per token
  • Be aware of your model’s limits (e.g., roughly 4,000 tokens for GPT-3.5-era models, far more for newer ones) — the sketch below shows how to count tokens before you send a prompt
  • Longer isn’t always better — quality over quantity
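If you want to check rather than guess, the tiktoken library (the tokenizer used by OpenAI models) can count tokens for you. A quick sketch — the encoding name and the 4,096-token budget are assumptions you should match to your actual model:

```python
# Quick sketch: counting tokens in a prompt with tiktoken before sending it.
# The encoding name and the token budget below are assumptions; check your model's docs.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by many recent OpenAI models

prompt = "Provide a concise summary of the main causes and effects of climate change..."
token_count = len(encoding.encode(prompt))

budget = 4096  # example context limit; newer models allow far more
print(f"{token_count} tokens ({budget - token_count} left in a {budget}-token budget)")
```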

C. Strategies for handling long contexts

  1. Summarize: Condense long information into key points
  2. Chunk it: Break long prompts into multiple, related queries (see the sketch after this list)
  3. Use references: Refer to previous conversations or external sources
  4. Prioritize: Put the most important information first
  5. Edit ruthlessly: Cut any unnecessary words or details
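The “chunk it” and “summarize” strategies can be partly automated: split the long text on a token budget so each piece fits comfortably inside the context window, then summarize each piece in its own request and combine the partial summaries at the end. A rough sketch of the splitting step, using tiktoken (the 1,000-token chunk size is just an illustrative choice):

```python
# Rough sketch of the "chunk it" strategy: split a long document into
# token-sized pieces so each piece fits comfortably inside the context window.
# Each chunk would then be summarized in its own request (e.g., with a call
# like the earlier sketches), and the partial summaries combined at the end.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # assumed encoding; match it to your model

def chunk_text(text: str, max_tokens: int = 1000) -> list[str]:
    """Split text into pieces of at most max_tokens tokens each."""
    tokens = encoding.encode(text)
    return [
        encoding.decode(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

long_document = "Paste or load your long source text here. " * 500  # placeholder text
chunks = chunk_text(long_document)
print(f"Split into {len(chunks)} chunks of up to 1000 tokens each.")
```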

Remember: Efficient prompts lead to better, more focused responses. Keep it clear, keep it relevant, and keep an eye on that token count!


Sahin Ahmed, Data Scientist

Data Scientist | MSc Data Science | Lifelong Learner | Making an Impact through Data Science | Machine Learning | Deep Learning | NLP | Statistical Modeling