Prompt Engineering Guide
Principles, Techniques, and Applications to Harness the Power of Prompts in LLMs as a Data Analyst
Large Language Models (LLMs) are on the rise, driven by the popularity of OpenAI’s ChatGPT, which took the internet by storm. As a practitioner in the data field, I look for ways to best utilize this technology in my work, especially for insightful-yet-practical work as a Data Analyst.
LLMs can solve tasks without additional model training via “prompting” techniques, in which the problem is presented to the model as a text prompt. Crafting the right prompts is important to ensure the model provides high-quality, accurate results for the tasks assigned.
In this article, I will share the principles of prompting, techniques for building prompts, and the roles Data Analysts can play in this “prompting era”.
What is prompt engineering?
Quoting Ben Lorica from Gradient Flow, “prompt engineering is the art of crafting effective input prompts to elicit the desired output from foundation models.” It’s the iterative process of developing prompts that can effectively leverage the capabilities of existing generative AI models to accomplish specific objectives.
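To make this iterative process concrete, here is a minimal sketch of how a Data Analyst might parameterize a prompt as a reusable template and refine its parts independently. The function name, placeholder fields, and template wording are illustrative assumptions, not a specific API from the article.

```python
# Illustrative sketch: a reusable prompt template for analysis tasks.
# Field names (task, context, output_format) are assumptions for this example.

def build_prompt(task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from independently iterable parts."""
    return (
        "You are a data analyst.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Respond as: {output_format}"
    )

prompt = build_prompt(
    task="Summarize the key revenue trends",
    context="Monthly revenue table for 2023, broken down by region",
    output_format="three concise bullet points",
)
print(prompt)
```

Keeping the task, context, and output format as separate parameters lets you tweak one element at a time and compare the model's outputs, which is the essence of the iterative process described above.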