With Generative AI, It’s Quality In, Quality Out

Prompt engineering can help deliver better outputs

Susan Coleman
Slalom Data & AI
4 min read · Sep 21, 2023


Where do you stand on generative artificial intelligence (generative AI, or GenAI)? An overhyped trend or invaluable enterprise technology? Consider this comment from an executive interviewed by CIO.com:

“I believe generative AI is a game changer at a fundamental level … I was at a Gartner conference in the US where they called generative AI out as the ‘third digital revolution’ after mainframe computing and the internet. The impact could really be that profound …”

A recent survey conducted by Salesforce showed that 86% of the IT leaders polled are largely in agreement with this statement and believe that GenAI will play a prominent role in their organization in the near future. That same study, however, found that 64% of the survey respondents have concerns about GenAI’s ethics.

Are the expected benefits of generative AI enough to offset such concerns? The Wall Street Journal has proclaimed that GenAI “by some estimates, stands to double the rate of U.S. productivity growth after a decade of widespread adoption and add trillions of dollars a year to global economic output.” With such strong indicators of the magnitude of GenAI’s impact, Slalom believes that GenAI should be a part of your organization’s digital strategy going forward. Addressing any concerns you have about the technology will be key to that effort.

Recognizing and overcoming generative AI risk

While it’s true that there is some risk involved in using generative AI — largely from allowing bias and inaccuracies or protected, confidential, or sensitive information to infiltrate your GenAI outputs — recognizing the potential sources of these issues and employing tactics to avoid them isn’t as difficult as you might think.

The first step is to be aware of the various touchpoints where the issues could find their way in. Tools such as ChatGPT have been trained on a curated set of data and then further guided by human feedback to improve the outputs they can deliver. With these types of tools, however, the vast quantities of training data being used can’t possibly be completely cleansed of all bias or inaccuracy.

If you’re building your own GenAI models with a technology like Microsoft Azure OpenAI, you have much more control over the data being used to train the models. But this doesn’t mean that your outputs will be entirely clean and ready to ship to their intended audiences. Regardless of whether you’re using a pre-built or a custom-built generative AI model, your outputs will always require human vetting.

But, as seen in the diagram below, your training data isn’t the only entry point for bias and inaccuracies. There are numerous phases throughout the generative AI workflow where you need consistent monitoring to avoid problematic content. Rather than putting all your focus on reviewing and refining your GenAI outputs at the back end of the process, Slalom recommends leaning into prompt engineering as a front-end method for helping the technology generate higher-quality outputs.

Graphic depicting a generative AI workflow and the touchpoints throughout that cycle where bias and misinformation can creep in.

Quality in, quality out

Generative AI systems work by responding to an input, so the input is critical to obtaining quality responses. Without effective inputs, models are more likely to make mistakes, hallucinate, and produce harmful content.

Prompt engineering is a means of optimizing your inputs for the purpose of obtaining better quality outputs. Or, even more simply, it’s “the writing, rewriting, and refining of prompts to teach the AI what ‘good’ outputs look like.” In this sense, “good” means outputs that answer the question or address the request put to the GenAI tool, and that are free from problematic content.

The basic GenAI workflow: write a prompt, run it through the GenAI model, receive output.
A basic GenAI workflow
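This workflow can be sketched in a few lines of Python. The sketch below is illustrative, not a specific product API: the model is represented as any callable that maps a prompt string to a completion, which in practice would be a thin wrapper around a hosted GenAI service.

```python
from typing import Callable

def run_genai_workflow(prompt: str, model: Callable[[str], str]) -> str:
    """Basic GenAI workflow: write a prompt, run it through a model, receive output.

    `model` is any callable mapping a prompt to a completion -- in a real
    deployment, a wrapper around a hosted API client.
    """
    # Step 1: the prompt is written by the caller (this is where prompt
    # engineering happens).
    # Step 2: run it through the GenAI model.
    output = model(prompt)
    # Step 3: receive the output -- which still requires human vetting.
    return output

# A stand-in model for illustration only; a real one would call an API.
def echo_model(prompt: str) -> str:
    return f"[model response to: {prompt}]"

print(run_genai_workflow("Summarize Q3 sales trends.", echo_model))
```

Keeping the model behind a simple callable interface also makes it easy to swap in a different provider, or a stub for testing, without changing the surrounding workflow.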

So, what are some best practices for prompt engineering that can help you optimize your GenAI outputs?

  • Focus your prompt on a specific problem, topic, question, or output type. Multiple asks in a single prompt can make it difficult for a model to answer effectively.
  • Provide context. Contextual information such as conversation history, examples of desired output, or the model’s role in the prompt helps the model solve the task and generate more appropriate responses.
  • Alter prompt wording. Reword the prompt to align with the model’s capabilities — you can even ask the model itself for help optimizing the prompt.
  • Specify the desired tone (formal, casual, informative, persuasive) and define the format or structure (essay, bullet points, outline, dialogue).
  • List important keywords, phrases, or terminology to be included as well as any terms that should be avoided.
  • Provide instruction as to desired style, structure, or content, such as asking the AI to use analogies or examples to clarify concepts.
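The practices above can be combined into a reusable prompt template. Here’s a minimal sketch in Python; the field names and the sample values (role, tone, format, keyword lists) are illustrative choices, not prescriptions from any particular tool.

```python
def build_prompt(task, role, context, tone, fmt, include, avoid):
    """Assemble a single-task prompt applying the best practices above:
    one focused ask, explicit role and context, desired tone and format,
    and keyword guidance for terms to include or avoid."""
    parts = [
        f"You are {role}.",                       # give the model a role
        f"Context: {context}",                    # provide context
        f"Task: {task}",                          # one specific ask per prompt
        f"Tone: {tone}. Format: {fmt}.",          # desired tone and structure
        f"Include these terms: {', '.join(include)}.",
        f"Avoid these terms: {', '.join(avoid)}.",
    ]
    return "\n".join(parts)

prompt = build_prompt(
    task="Explain retrieval-augmented generation to a business audience",
    role="an enterprise AI consultant",
    context="The reader has no machine-learning background",
    tone="informative",
    fmt="three short bullet points",
    include=["grounding", "source documents"],
    avoid=["jargon"],
)
print(prompt)
```

A template like this makes prompt refinement repeatable: instead of rewriting free-form text each iteration, you adjust individual fields and compare the resulting outputs.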

A well-balanced generative AI program

A successful GenAI program requires a bit of a balancing act. It’s important to take steps like the ones outlined here to mitigate risk and increase the chances of delivering truly high-quality outputs. But agility and experimentation are just as important to getting the most out of generative AI. If you’d like to learn more about achieving this balance, download our whitepaper or reach out to us directly to speak to our experts.

Slalom is a global consulting firm that helps people and organizations dream bigger, move faster, and build better tomorrows for all. Learn more about Slalom’s human-centered AI approach and reach out today.

Susan Coleman
Slalom Data & AI

Content creator and storyteller, focusing on tech topics. Manager, Content — Google & Microsoft at Slalom Consulting.