about ai

Diverse topics related to artificial intelligence and machine learning, from new research to novel approaches and techniques.

How to do prompt engineering


A simple guide to improve the results of your LLM application by improving the prompts in a systematic manner.

Introduction

Large language models (LLMs) are transformer networks trained to generate an output that follows the instructions given in the input (or prompt). In a traditional deep learning task, the output of an artificial neural network is improved by updating its weights with an error signal. With LLMs there is a second option: because the output depends on both the weights and the input, we can keep the weights fixed and change the input instead, and a different input produces a different output. That is, to improve the output according to some criterion or metric for a given task, we can improve the input, or prompt. The process of changing the prompt to increase the performance of an LLM on the task at hand is referred to as prompt engineering. In this post, I recapitulate some of the most widely used techniques for modifying prompts to improve the performance of an LLM at a particular task.
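To make the idea of systematic prompt improvement concrete, here is a minimal sketch of the loop it implies: generate candidate prompts, score each one against a small evaluation set with a chosen metric, and keep the best. The model function and evaluation data below are toy stand-ins I made up for illustration, not any real LLM API.

```python
# Sketch of systematic prompt improvement: score candidate prompts on a
# small evaluation set and keep the highest-scoring one.

def select_best_prompt(candidates, model, eval_set):
    """Return the candidate prompt template that scores best on eval_set.

    candidates: prompt templates containing a {question} placeholder
    model: callable mapping a full prompt string to an output string
    eval_set: list of (question, expected_answer) pairs
    """
    def score(template):
        hits = 0
        for question, expected in eval_set:
            output = model(template.format(question=question))
            hits += int(expected in output)  # simple containment metric
        return hits / len(eval_set)

    return max(candidates, key=score)

# Toy stand-in model: only "answers" when asked to show its reasoning.
def toy_model(prompt):
    if "step by step" in prompt:
        return "2 + 2 = 4"
    return "I am not sure."

candidates = [
    "Answer: {question}",
    "Think step by step, then answer: {question}",
]
best = select_best_prompt(candidates, toy_model, [("2 + 2", "4")])
print(best)  # the step-by-step template wins on this toy eval set
```

In practice the metric, the evaluation set, and the way candidates are generated all matter; the point of the sketch is only that prompt engineering can be treated as an optimization over inputs rather than over weights.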

Prompting techniques

At its core, an LLM is a transformer network trained to produce the most likely token to follow a given input (or context). For the LLM to produce a different output, either the input must be modified, or the network weights, or both. Of these, the easiest way to improve the output of an LLM is to change its input (or prompt). There are many techniques for changing the prompt. See a classification of techniques in the…
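One widely used family of such techniques is few-shot (in-context) prompting: prepending worked examples to the query so the model can infer the task from context. As a hedged illustration, here is a small helper that assembles such a prompt; the field labels and layout are one common convention, not a fixed standard.

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, worked examples, then the query."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # The trailing "Output:" cues the model to complete the final answer.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment of each sentence as positive or negative.",
    [("I loved this movie.", "positive"), ("The food was awful.", "negative")],
    "The service was excellent.",
)
print(prompt)
```

The same skeleton covers zero-shot prompting (an empty `examples` list) and can be extended with role instructions or chain-of-thought demonstrations.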


Published in about ai


Written by Edgar Bermudez

PhD in Computer Science and AI. I write about neuroscience, AI, and Computer Science in general. Enjoying the here and now.