Prompt Design with Vertex AI

Luz De Leon
Frontend at Accenture
5 min read · Sep 2, 2024

In GenAI, Prompt Design is the process of crafting prompts that evoke the desired response from language models.


Disclaimer: Responsible use of GenAI tools involves keeping human critical thinking and analysis in the loop. The examples used in this article are strictly for demo purposes.

Prompts are a crucial part of ensuring accurate responses from a language model. The iterative process of updating them until we're sure we'll get the most accurate response from the model is called Prompt Design (or Prompt Engineering).

Due to the nature of language models 🧠, it's easy to forget that we're having a conversation with an advanced program based on probability, not with a living computer. That makes it easy to inject unnecessary padding into the prompt; expressions like "Could you please…" and "Thank you" are a perfect example, as they can reduce the model's performance and, in more drastic cases, add bias.

While there is no single best practice for designing prompts (as of 2024, it's more of a progressive effort), there are a couple of methods you can use to shape the response. For this (hopefully quick) tutorial, I'll use the cloud I'm learning 👩🏻‍💻, GCP ☁️, and its machine learning platform, Vertex AI, to apply prompt design to the Gemini model.

Vertex AI offers a fully managed, end-to-end platform for data science and machine learning. You can use it to train your own model with your own data, and it also comes with AutoML, a feature that allows you to train a model with very little effort and expertise. You're charged for what you consume, so check out the pricing charts.

For this tutorial, I'm using my personal instance with billing set up, but if you have an A Cloud Guru license, you can open a sandbox.

**Not sponsored** 😁

Select the cloud instance you want to start

Set-up

1. Go to the Cloud console.

2. Search for Vertex AI and enable the API.

Search result — Google Cloud Console

3. Select Language.

4. Select Generate Text > Text Prompt.

Structure of a Prompt

The magic formula for a prompt is to give the tool context, state your request, and add optional parameters 🪄.

  1. Context: Ask the tool to play a role. Example: “As an Angular Front End Development assistant, …”
  2. Request: The action to be performed. Example: “Review the following block of code”.
  3. Parameter: Any type of modifier. Example: “In a paragraph of 3 lines, write a statement of the main root cause of this defect”
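The three parts above can be assembled into a single prompt string before you paste it into Vertex AI Studio. A minimal sketch (the `build_prompt` helper is my own, not part of any SDK):

```python
def build_prompt(context, request, parameters=None):
    """Assemble a prompt from its three parts: context, request, parameters.

    context: the role the tool should play.
    request: the action to be performed.
    parameters: optional list of modifiers appended at the end.
    """
    parts = [context, request]
    if parameters:
        parts.extend(parameters)
    return " ".join(parts)

prompt = build_prompt(
    "As an Angular Front End Development assistant,",
    "review the following block of code.",
    ["Write a 3-line statement of the main root cause of this defect."],
)
```

Keeping the parts separate like this makes it easy to iterate on one part (say, the parameters) while leaving the context stable.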

Prompt Forms

In the Vertex AI interface, you will find a toggle button with two modes: Freeform and Structured.

Freeform and Structured mode in Vertex AI

Free-form prompts

No examples — just ask.

Example of a free form prompt — Ideal when you’re just looking for ideas.

Structured prompts

You provide any number of examples you want the model to follow, from a single one to a whole series, to include in your design.

In the next section, we'll go over the prompt design methods available in the tool.

Prompt Design Methods

Zero-shot prompt

Select FREEFORM to enable the free-form method.

You’re only giving a prompt that describes the task with no example data to follow.

Example 1: What is a prompt?

Example of a zero-shot prompt in Gemini app.

Example 2: Live from Vertex AI

Zero-shot prompt live from cloud console

One-shot prompt

You give an example of what is being asked.

In Vertex AI, select STRUCTURED and see the 3 sections available.

  1. Context: What your ask will be and the role your agent will play in order to perform the task.
  2. Examples: An example of the input you'll provide and the output you expect.
  3. Test: Continuation of the task to be performed, based on the example provided.
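If you're not using the Structured form, the same three sections can be flattened into plain text. A sketch, where the `Input:`/`Output:` labels are just one common convention, not something Vertex AI requires:

```python
def one_shot_prompt(context, example_input, example_output, test_input):
    """Build a one-shot prompt: context, a single worked example, then the test."""
    return (
        f"{context}\n\n"
        f"Input: {example_input}\n"
        f"Output: {example_output}\n\n"
        f"Input: {test_input}\n"
        f"Output:"
    )

prompt = one_shot_prompt(
    "You classify fruits by their color.",  # context
    "banana", "yellow",                     # the single example
    "strawberry",                           # the test for the model to complete
)
```

Ending the prompt with a dangling `Output:` invites the model to continue the pattern set by the example.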

Few-shot prompt

Provide a small number of examples of the task the model is being asked to perform. For example, if you want the model to write a news article for the Front End blog, you might give it a few news articles to read.

Example 1: List the top 3 artists of particular art movements, based on a list of examples

Example 2: Writing a blog article on potential uses of GitHub Copilot, based on existing articles on the internet
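Extending the one-shot idea, a few-shot prompt simply repeats the example block once per example. A minimal sketch, with the artist lists as illustrative sample data:

```python
def few_shot_prompt(context, examples, test_input):
    """Build a few-shot prompt from a list of (input, output) example pairs."""
    shots = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{context}\n\n{shots}\n\nInput: {test_input}\nOutput:"

prompt = few_shot_prompt(
    "List the top 3 artists of the given art movement.",
    [
        ("Impressionism", "Monet, Renoir, Degas"),
        ("Cubism", "Picasso, Braque, Gris"),
    ],
    "Surrealism",  # the movement the model should answer for
)
```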

Parameters

Parameters sidebar in Vertex AI

Temperature

Temperature controls the degree of randomness in token selection.

Lower temperatures are good when you’re expecting a true or correct response.

Higher temperatures are good when you're expecting a diverse and creative response. Be careful, though: you may need to rely on the accuracy of your data so it doesn't lead to biased results.

Temperature range values depend on the model you’re working on. The one we’re using in this example goes from 0 to 2.

In the example below, notice how the language changes when we switch the parameter value from 0 to 2; this gives us two polarized results.

Use case: Explain art to a 5 year old.
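Under the hood, temperature rescales the model's token scores before they're turned into probabilities: dividing by a low temperature sharpens the distribution toward the top token, while a high temperature flattens it so lower-ranked tokens get picked more often. A minimal sketch with made-up scores for three candidate tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw token scores into probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # near-greedy: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: more varied choices
```

At a temperature of exactly 0, sampling reduces to always picking the top token (the division above would break, so implementations special-case it as greedy decoding).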

Output Token Limit

At times, the model becomes verbose even on short answers. Limiting the output helps you shrink the response to a set number of tokens.

Example: Write a good-night story for a 5-year-old.

Takeaways

Prompt design is a good practice for refining, as users, the results we expect from GenAI tools. Using a cloud service also allows you to test and refine the prompts for your AI solution; you can then get the code and implement it as an API in your app 🎉
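As a sketch of that last step, here's what calling Gemini from Python with the Vertex AI SDK (the `google-cloud-aiplatform` package) might look like; the project ID and model name below are placeholders, and the call names follow the public SDK as I understand it:

```python
def build_generation_config(temperature=0.2, max_output_tokens=256):
    """The same parameters we tuned in the sidebar, as a plain dict."""
    return {"temperature": temperature, "max_output_tokens": max_output_tokens}

def generate(prompt, project_id, location="us-central1", **params):
    """Send a prompt to Gemini on Vertex AI (needs google-cloud-aiplatform installed)."""
    import vertexai
    from vertexai.generative_models import GenerativeModel

    vertexai.init(project=project_id, location=location)
    model = GenerativeModel("gemini-1.5-pro")  # placeholder model name
    response = model.generate_content(
        prompt, generation_config=build_generation_config(**params)
    )
    return response.text

# Example (requires a billed GCP project):
# generate("Explain art to a 5 year old.", "my-project", temperature=2.0)
```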

