What Does a Prompt Engineer Do?

Is Prompt Engineering the Career of the Future?

Ivan Campos
Sopmac AI
11 min read · Jan 25, 2023


The job market is constantly evolving, and one recent development is a viral job posting for a role with a salary range of $250k-$335k. The posting has captured the attention of many job seekers, since the opportunity to earn such a high salary is rare and highly coveted.

One of the most intriguing aspects of this job posting is the title of the role — Prompt Engineer. This unusual and unfamiliar title has sparked curiosity and interest among those who have come across the posting. Many are wondering what exactly a Prompt Engineer does, and what skills and qualifications the role requires.

Rephrasing the Job Opportunity

Are you ready to unlock the secrets of the new frontier of language technology?

As a prompt engineer, you’ll have the power to craft prompts that guide large language models (LLMs) to generate complex behaviors and accurate outputs. But be warned: this is uncharted territory. Because the field of prompt engineering is relatively new, it’s not easy to find a well-qualified candidate. If you’re up for the challenge, however, you can demonstrate your skills by pointing to existing projects that apply prompt engineering to language or image-generation models, or by experimenting with LLMs and producing complex behaviors with well-crafted prompts.

The future of language technology is waiting. Will you be the one to shape it?

Prompt Engineer

Imagine being the master of language, the architect of prompts that guide the most advanced language models — GPT-3, DALL-E, Midjourney, and ChatGPT — to generate relevant and accurate output. That’s exactly what a prompt engineer does. As a prompt engineer, you’ll design and craft prompts that provide the model with the information and context it needs to understand the task at hand. Whether it’s providing resources on a specific topic, using specific language to guide the model’s output, or applying constraints to shape the outcome, you’ll be the one calling the shots. But it’s not just about giving orders: you’ll need a deep understanding of the task or application, the model’s capabilities and limitations, and potential biases in the data.

Your ultimate goal is to design creative and varied prompts that encourage the model to generate interesting and varied outputs, continuously monitor and improve the prompts, and collaborate with the team to achieve the best possible outcome. To excel in this role, you should have a good understanding of machine learning, natural language processing, and related technologies, as well as programming skills.

What are the expectations for someone playing the role of Prompt Engineer?

A prompt engineer is responsible for designing and crafting prompts for large language models. The role of a prompt engineer includes the following expectations:

  1. Understanding of the task: The prompt engineer should have a good understanding of the task or application that the model will be used for, and be able to design prompts that are relevant and appropriate for that task.
  2. Knowledge of the model: The prompt engineer should have a good understanding of the model’s capabilities and limitations, and be able to design prompts that are within the model’s capabilities.
  3. Creativity: The prompt engineer should be able to design creative and varied prompts that encourage the model to generate interesting and varied outputs.
  4. Clear and concise: The prompt engineer should be able to design prompts that are clear and concise, making it easy for the model to understand the task and stay on track.
  5. Ability to test and evaluate: The prompt engineer should be able to test and evaluate the model’s output, and use that information to improve the prompts and the model’s performance.
  6. Continual improvement: The prompt engineer should continuously monitor the prompts’ performance and adjust them as necessary.
  7. Familiarity with data bias: The prompt engineer should be aware of the potential biases in the training data and design prompts that minimize them.
  8. Collaboration: The prompt engineer should be able to work collaboratively with other members of the team, such as data scientists, engineers, and product managers.
  9. Technical skills: The prompt engineer should have a good understanding of machine learning, natural language processing, and related technologies, as well as programming skills.
  10. Stay current with the field: The prompt engineer should keep up with the latest developments in the field and be able to apply that knowledge to their work.

Prerequisites

LLM Architecture Knowledge is a prerequisite for prompt engineers because it provides a foundational understanding of the underlying structure and function of the language model, which is crucial for creating effective prompts.

Making ambiguous problems clear and identifying core principles that can translate across scenarios are also important, because they allow the engineer to clearly define the task at hand and develop prompts that can be easily adapted to different contexts.

Creating core principles that can translate across scenarios is essential for creating consistent and coherent prompts that can be used in multiple situations.

Well-crafted prompts are the final piece of the puzzle, as they are the tool that the engineer uses to communicate the task to the language model and guide its output.

Together, these skills and knowledge allow the prompt engineer to create effective and efficient prompts that can be used to train and improve the performance of the language model.

Prerequisite #1: Large Language Model Architecture Knowledge

GPT-3 (Generative Pre-trained Transformer 3) is a deep neural network based on the transformer architecture introduced in the paper “Attention Is All You Need.” The transformer is designed to handle sequential data such as text; in its original form it is made up of an encoder and a decoder, while GPT-3 itself uses a decoder-only variant.

[Figure: transformer architecture diagram. Source: Papers with Code]

In the original transformer, the encoder takes in the input text and converts it into a continuous vector representation, also known as an embedding. The encoder is made up of multiple layers of self-attention and fully connected layers. The self-attention mechanism calculates attention scores for every word in the input text, representing the importance of each word relative to every other word. This allows the model to understand the context of the input text and the relationships between words.

The decoder then generates the output text, also using multiple layers of self-attention and fully connected layers. It takes the encoder’s vector representation, along with the previously generated words, and produces the next word in the sequence. GPT-3 drops the encoder entirely and predicts each next token by attending only to the tokens generated so far.

The transformer architecture in GPT-3 is composed of multiple layers, each made up of two sub-layers: a multi-head self-attention mechanism, and a fully connected feedforward neural network. The multi-head self-attention mechanism allows the model to attend to different parts of the input sequence at different positions. The feedforward neural network is used to process the information from the self-attention mechanism.
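The scaled dot-product self-attention described above can be sketched in a few lines of NumPy. This is a simplified, single-head illustration (random weights, no masking, no multi-head splitting), not GPT-3’s actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X:          (seq_len, d_model) input embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Attention scores: how much each position attends to every other one.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of value vectors

# Toy example: 4 tokens, model dim 8, head dim 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = [rng.normal(size=(8, 4)) for _ in range(3)]
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 4)
```

In a full transformer layer, this output would then pass through the feedforward sub-layer, and the attention computation would be repeated across several heads in parallel.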

GPT-3 has 175 billion parameters, making it one of the largest language models to date. It’s trained on a massive amount of text data, such as books, articles, and websites, and is able to generate human-like text, answer questions, and perform other language tasks. However, it’s important to note that GPT-3 still requires a lot of computational resources for training and inference, and its training data is often biased.

Do prompting techniques differ between GPT-3, ChatGPT, DALL-E, and Midjourney?

Prompting techniques can vary between different large language models such as GPT-3, DALL-E/Midjourney, and ChatGPT.

GPT-3 uses a combination of unsupervised and supervised learning, where it’s trained on a large corpus of text data and fine-tuned on specific tasks. GPT-3 has the ability to generate human-like text, answer questions, and perform other language tasks, but it’s also been known to generate biased or irrelevant outputs.

DALL-E and Midjourney are models that generate images from text prompts. Their training is based on datasets of images and their associated captions, which allows them to produce images from a text prompt. Both can generate a wide range of images, from photorealistic to abstract.

ChatGPT is a conversational language model trained on conversational data and is specifically designed to generate human-like text in a conversation setting. It has been fine-tuned on conversational tasks such as question answering, text completion and more.

The prompting techniques of these models vary based on the specific task they are designed for, the type of data they are trained on, and the specific architecture they employ.

  • GPT-3 is great at generating human-like text and performing language tasks
  • DALL-E/Midjourney generate images
  • ChatGPT is geared towards conversational tasks

Prerequisite #2: Making ambiguous problems clear and identifying core principles that can translate across scenarios

Here are some best practices for making ambiguous problems clear:

  1. Define the problem clearly: Clearly define the problem and its objectives. Make sure that everyone involved in the problem-solving process understands the problem and what needs to be achieved.
  2. Break the problem down into smaller parts: Break the problem down into smaller, more manageable parts. This will make it easier to understand and solve the problem.
  3. Gather all relevant information: Gather all relevant information and data related to the problem. Make sure that everyone involved in the problem-solving process has access to the same information.
  4. Identify key stakeholders: Identify the key stakeholders who are affected by the problem, and involve them in the problem-solving process.
  5. Encourage creativity and diverse perspectives: Encourage creativity and diverse perspectives when solving the problem. Different perspectives can help to identify new solutions and overcome obstacles.
  6. Use a structured problem-solving method: Use a structured problem-solving method, such as the scientific method, or the Six Sigma DMAIC process, to guide the problem-solving process.
  7. Continuously evaluate and adapt: Continuously evaluate and adapt the problem-solving process as new information becomes available or as the situation changes.

Prerequisite #3: Creating core principles that can translate across scenarios

Core principles that can translate across scenarios are:

  1. Understand the problem: Understand the problem and its objectives clearly
  2. Break it down: Break down the problem into smaller, more manageable parts
  3. Gather information: Gather all relevant information and data related to the problem
  4. Identify stakeholders: Identify key stakeholders who are affected by the problem and involve them in the problem-solving process
  5. Encourage diversity: Encourage creativity and diverse perspectives when solving the problem
  6. Use a structured method: Use a structured problem-solving method to guide the process
  7. Continuously evaluate: Continuously evaluate and adapt the problem-solving process as new information becomes available or as the situation changes
  8. Communicate effectively: Communicate effectively with all stakeholders and keep them informed of progress and any changes made
  9. Keep it simple: Keep the problem-solving process simple and avoid using jargon or overly complex language
  10. Be flexible: Be flexible and open to new ideas and approaches; don’t be afraid to pivot if a solution is not working.

By following these core principles, it’s possible to make ambiguous problems clear, and to come up with solutions that are accurate, effective, and efficient.

Prerequisite #4: Well-crafted Prompts

What makes for a well-crafted Prompt?

A well-crafted prompt for a large language model is one that is clear, specific, and well-defined. The prompt should provide the model with enough information and context to understand the task and generate relevant and accurate output.

A clear and concise prompt makes it easy for the model to understand the task and stay on track. For example, a prompt such as “Write a short story about a magical narwhal” is clear and specific, and gives the model a clear goal to work towards.

A well-defined prompt also provides the model with enough context and information to generate accurate and relevant output. For example, a prompt such as “Explain Bostrom’s Simulation Argument in layman’s terms” provides the model with the specific topic and target audience, which helps the model generate an explanation that is easy to understand for non-experts.

Additionally, a great prompt also encourages the model to be creative and generate varied outputs. For example, a prompt such as “Generate a poem about AI eating Software” encourages the model to come up with different styles of poems and different ways to express the theme.

The ChatGPT Prompt Book

Top 10 Tips for getting complex behaviors from a series of well-crafted prompts:

  1. Be specific and clear: Provide the model with a clear and specific goal or task to work towards. This will make it easier for the model to understand what you’re looking for and generate more relevant output.
  2. Provide enough context: Make sure to provide the model with enough context and information to generate accurate and relevant output. For example, if you’re asking the model to generate text on a specific topic, provide it with information and resources on that topic.
  3. Use multiple prompts: Use a series of well-crafted prompts to guide the model through a complex task, rather than just one prompt. This allows you to break the task down into smaller, more manageable steps.
  4. Encourage creativity: Encourage the model to be creative and generate varied outputs. For example, if you’re asking the model to generate text, don’t limit it to a specific writing style or format.
  5. Use constraints: Use constraints to guide the model’s output. For example, you can use constraints such as length, grammar, and vocabulary to ensure that the output is relevant and appropriate.
  6. Use a diverse training dataset: Use a diverse training dataset to help the model understand different perspectives, cultures, and writing styles. This will help the model generate more varied and nuanced output.
  7. Test the model with different inputs: Test the model with different inputs and prompts to see how it behaves and identify potential issues.
  8. Use human evaluation: Use human evaluation to determine the quality of the model’s output and identify areas for improvement.
  9. Use active learning: Use active learning to fine-tune the model and improve its performance on a specific task.
  10. Experiment with different architectures: Experiment with different architectures and hyperparameters to see how they affect the model’s performance.

The effectiveness of these tips depends on the task and the model, so it’s important to experiment and try different approaches.
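Tip #3 — breaking a complex task into a series of prompts — can be sketched as a simple chain in which each step’s output feeds the next prompt. The `call_model` function below is a stub standing in for whatever LLM API you use; a real implementation would make an API call instead of echoing its input:

```python
def call_model(prompt):
    # Stub for a real LLM call; echoes the prompt so the chain is runnable.
    return f"<model output for: {prompt!r}>"

def run_chain(steps, initial_input):
    """Run a series of prompt templates, feeding each step's
    output into the {prev} slot of the next template."""
    result = initial_input
    for template in steps:
        result = call_model(template.format(prev=result))
    return result

# A complex task split into three smaller, more manageable steps.
steps = [
    "Summarize the following article: {prev}",
    "List the three key claims in this summary: {prev}",
    "Draft counterarguments to these claims: {prev}",
]
final = run_chain(steps, "(article text goes here)")
print(final)
```

Because each step is a separate prompt, intermediate outputs can be inspected, evaluated, and refined individually — which is exactly what the testing and evaluation tips above call for.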

What does the future hold for Prompt Engineers?

It is difficult to predict exactly how the job market will evolve over the next five years, but it is likely that the role of a prompt engineer will continue to be in demand as the use of large language models becomes more widespread in various industries. With the increasing interest and development in natural language processing, machine learning, and artificial intelligence, there is a growing need for experts who can design and craft prompts that can harness the power of these models to solve real-world problems.

Large language models have a wide range of applications, from content creation to customer service, and it’s expected that this will continue to grow. As a result, there will be a continued need for experts who can design and craft prompts that are tailored to the specific task or application. Additionally, as the models get more sophisticated, there will be a greater need for experts who can fine-tune the models and improve their performance.

Prompt engineers will have to keep up with the latest developments in the field and be able to apply that knowledge to their work. As the models are being deployed in more areas, the need for transparency and responsibility will increase, which will make the role of the prompt engineer even more important.

In conclusion, while it’s difficult to predict the exact shape of the job market in five years, it is likely that the role of a prompt engineer will continue to be in demand as the use of large language models becomes more widespread.

