How To Use Generative AI the Right Way

AI is dumb, but you can make it smarter in the way you use it

Zaigam Akhtar
Lampshade of ILLUMINATION
9 min read · Jan 28, 2024


Generative AI tools like ChatGPT and Google Bard may have been developed for convenience, but to get the desired results, you still have to use your human brain. Let me explain. (Wasn’t trying to make that rhyme :)

Contrary to popular belief, AI tools are quite dumb. They spit out what you feed them with little to no intuitive effort. It’s YOU, the prompter, who needs to put in the effort (at least a little) to make these tools work the way you want them to.

I believe that though these tools may not be that smart yet, they can still produce meaningful content, provided you use a well-structured prompt.

With that said, I’m gonna keep this post confined to LLMs, as I haven’t experimented enough with other types of AI models, like Midjourney and Stable Diffusion, yet.

Using generative AI to create content is easy but using it to create impactful content is a skill that you must learn.

The Power of Prompt Engineering

Prompt engineering may seem like a pseudo-skill at first but there’s more to the picture than meets the eye.

You see, any idea in its early stage will have its share of criticism. Fair enough: domains like prompt engineering are still in an evolving stage of development, as is GenAI itself. This might just be one of the most sought-after skills in the future! (OR NOT)

The Prompt (Screenshot by Author)
The Output (Generated in DALLE-3)

XIS10CIAL in their LinkedIn post put it best:

“Like many emerging scientific fields, prompt engineering is in its nascent stage, where experimentation and diversity of approaches are essential for growth and development.”

If you look up the definition, you’ll find: Prompt engineering is a technique used in artificial intelligence to improve and fine-tune language models for specific tasks and desired outputs. (Google Bard answered that 🤯)

People are missing the main idea here. It is made of two words, PROMPT and ENGINEERING: the former means to provide instructions (in this context), and the latter means to engineer, or design.

Let’s focus on the former part. Prompting is different from asking questions and getting results. You need to provide instructions, meaning you have to curate them. The better your instructions, the better the results.

If your prompt is weak, how the heck do you expect the model to generate gold?

Prompt engineering or prompt designing is a burgeoning skill that you can acquire if you take the right steps. More than that, it’s vital to understand how it works if you want to leverage the power of generative AI in your creative or personal endeavors.

How to Create Effective Prompts

Before wrapping your head around effective prompts, you need to understand the scope of the model you’re working with.

Let’s consider an AI model designed for language translation. Understanding its scope means recognizing that this model is specifically trained to translate text from one language to another, maintaining accuracy and context.

Now, suppose you’re working with this translation model and need to translate a technical manual from English to French. Before creating your prompt, you recognize that the model’s scope includes understanding technical jargon and context in both languages.

So, your prompt isn’t just a simple request for translation. Instead, it might be something like, “Translate the following technical manual from English to French, ensuring that all technical terms are accurately conveyed and the professional tone is maintained.”

This prompt is effective because it aligns with the model’s capabilities (language translation) and the specific context (a technical manual), leading to a more accurate and contextually appropriate translation.

This is where the notion of “Prompt Priming” comes into the picture. But we’ll talk about it later. For now, let’s explore some of the techniques to curate effective prompts:

1. Keep it Specific

This one is a no-brainer. The AI model can only generate output that meets your expectations if your instructions are specific enough.

For example, a prompt like “Write a dialogue between two astronauts” may steer in any direction, as you haven’t specified the context of their conversation.

However, if you make it a bit more specific, like “Draft a dialogue between two astronauts on Mars who have just discovered evidence of ancient microbial life, focusing on their excitement and the scientific implications,” you’ll get a more relevant output. (Try it!)

A not-so-specific prompt
A more specific prompt leads to a more relevant output

2. State the Result Format

Steer the model to generate the result in the format you desire. Always state the output format in your prompt to get the best results.

The following prompt is a good example: Tell the difference between an optometrist, an ophthalmologist, and an optician in a table format. Use at least 6 rows in the table and keep the sentences short.

ChatGPT output in a Table Format

3. Provide Context and Background

With the new Bard update and GPT-4, you can ask the model to search the web for specific information. Though that works, feeding in background information manually gives better results.

If the prompt requires specific knowledge or references a particular situation, provide enough background information in the prompt for an effective response.

Make sure to edit the result later on and add your perspective to it; otherwise, it will simply be a case of paraphrasing.

4. Use Roleplay

Giving the model a voice or role to imitate works wonders when you’re trying to generate something different.

When you specify a role at the start of your prompt, the model generates something with a little more personality, rather than the generic output you get when no role is specified.

Consider the following prompt:

“You are a successful horror writer and poet. Write a grim poem on the tribulations of grief. Write in the voice of Edgar Allan Poe.”

GPT-4 Output:

“In the somber shades of night’s embrace,
Where shadows weave with ghostly grace,
There lies a heart, so wrought with grief,
A soul in pain, seeking relief.”

Beneath the moon’s lamenting light,
Through corridors of endless blight,
Wanders the spirit, lost and torn,
A wraith of sorrow, bleak and worn.

“O, woe! The sting of love now lost,
On tempest seas, my heart was tossed.
The cruel hands of fate did sever,
My dearest love, gone forever.”

In the crypts of silent screams,
Echoes of forsaken dreams,
The mourner weeps, alone, bereft,
In the hollow halls, where joy has left.

“Curse the stars that coldly gleam,
Haunting me in my tortured dream.
For in their light, I see her eyes,
A spectral vision, my demise.”

Through the veil of endless night,
No dawn breaks, no hope in sight.
In the throes of grief’s unyielding grasp,
Each breath a ghostly, withered gasp.

“Here, in the depths of despair’s abyss,
Where shadows kiss with death’s cold bliss,
I lay my heart, a shattered shrine,
To love lost in the sands of time.”

So in the darkness, let me dwell,
A soul condemned to this living hell.
For in this grief, I am entwined,
A poet of the damned, resigned.

Now let’s see the result of a prompt like “Write a horror poem”:

In the dark and gloomy night,
Where the moon was not so bright,
Lived a ghost, not seen but there,
Floating, floating in the air.

“Ooooh,” it wailed, but not too clear,
Trying to spread its ghostly fear.
But alas, it wasn’t so spooky,

….
Let’s just not continue with this one. 😬
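For readers using a chat-style API rather than the web interface, the roleplay tip maps naturally onto a system message. Here’s a minimal sketch, assuming the common system/user chat message format; the helper function is my own illustration, not an official recipe:

```python
# Hypothetical helper: put the persona in a "system" message and the task
# in a "user" message, following the common chat-completions message format.

def build_roleplay_messages(role, task):
    """Return a chat message list that primes the model with a role."""
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": task},
    ]

messages = build_roleplay_messages(
    "a successful horror writer and poet, writing in the voice of Edgar Allan Poe",
    "Write a grim poem on the tribulations of grief.",
)
# The list can then be passed to a chat completion endpoint, e.g. (OpenAI SDK):
# client.chat.completions.create(model="gpt-4", messages=messages)
```

Keeping the role in the system message and the task in the user message tends to make the persona stick across the whole conversation, not just the first reply.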

5. Mention the Constraints

Sometimes generative AI tends to generate more than is necessary, especially in text-to-text models. In that regard, you must specify the constraints in your prompt to get a precise response.

Simply put, you have to tell the model what NOT to create, along with the specifics of what you intend it to do. Not only will the output be faster to produce, but it will also be more aligned with your requirements.

It may seem counterproductive at first, but remember: when it comes to creativity, less is more. (That’s true of AI creativity as well.)

Let’s understand this with an example:

ChatGPT Output Screenshot

Notice how the output generated is filled with unnecessary emojis, exclamation marks, and superficial marketing speak? Not using prompt constraints leads to such results, which isn’t ideal.

Now check out the prompt where I list multiple constraints along with the other tips we’ve discussed so far:

A prompt with the output where constraints are specified

What is Prompt Priming?

Put simply, prompt priming is directing or steering the model to create a more nuanced output using focused, pre-curated techniques. It’s like handing the model a roadmap to follow so that it stays aligned with the task at hand and doesn’t drift in unasked-for directions.

In essence, prompt priming is your personal steering wheel for AI conversations.

Whether you need a quick, straight-to-the-point answer, are hunting for an out-of-the-box idea, or want a response with a specific flair, prompt priming is the way to go.

So, whatever tips and techniques we’ve discussed so far are all aspects of prompt priming. To summarize: Specificity, Formatting, Contextualization, Roleplay, and Constraints.
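Those five aspects can be stitched together in one place. Here’s a minimal sketch of a prompt builder; the helper name and section labels are my own illustration, not an official priming template:

```python
# Hypothetical prompt "primer" combining the five aspects summarized above:
# roleplay, contextualization, specificity, formatting, and constraints.

def build_primed_prompt(task, role=None, context=None,
                        output_format=None, constraints=None):
    """Assemble a primed prompt from its optional parts."""
    parts = []
    if role:
        parts.append(f"You are {role}.")                        # roleplay
    if context:
        parts.append(f"Background: {context}")                  # contextualization
    parts.append(task)                                          # specificity
    if output_format:
        parts.append(f"Format the output as: {output_format}")  # formatting
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))  # constraints
    return "\n".join(parts)

prompt = build_primed_prompt(
    task="Explain the difference between an optometrist, an ophthalmologist, and an optician.",
    role="a medical copywriter",
    output_format="a table with at least 6 rows",
    constraints=["keep sentences short", "no emojis", "no marketing speak"],
)
```

The resulting string can be pasted into ChatGPT or Bard as-is; the point is simply that every priming aspect gets a deliberate slot instead of being left to chance.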

That said, there are a few more points under prompt priming that can take your prompt game one step further, but I’m afraid that’s beyond the scope of this article. If you’re interested in exploring further, I’d recommend you check out this blog post: https://medium.com/aimonks/what-is-priming-the-prompt-1f12dcb855a8

Prompt Engineering is not a Job, it’s a Skill

Prompt engineering is not a hard skill. It’s something anybody can learn without any prior technical background. That doesn’t mean it’s not valuable.

As more and more AI tools become commonplace in our daily lives, prompt engineering will be perceived as a basic skill to possess, similar to having a working knowledge of email, documentation tools, and the like in most white-collar jobs. The special advantage people now enjoy from knowing how to use language models effectively will soon be common.

While learning how to use these language models is important, this skill will quickly become a basic requirement for most office jobs, not a unique advantage. Techies, creatives, salespeople, and social workers will all be expected to use these tools to do their jobs better and faster.

However, as I mentioned earlier, it’s an evolving field, and you have to keep yourself updated with recent developments to keep up. For now, concepts like prompt priming and fine-tuning are prevalent; in the future, there’ll be more.

So if you want to be ahead of the game, it’s good to learn about using generative AI effectively now. Just don’t go in expecting lucrative job prospects in prompt engineering, as that seems unlikely.

My social links: LinkedIn | Twitter | Instagram

If you liked this post or found it insightful, please take a minute to press the clap button; it increases the post’s visibility for other Medium users. Thank You :)


A storyteller trying to find stories in people, places, & experiences worth sharing. I write on a whim about Tech, Books, Films, Self-Improvement, & Poetry. 🌻