Demystifying Prompt Engineering for the Enterprise

Lee Boonstra
Google Cloud - Community
3 min read · Jun 10, 2024

Prompt engineering for business applications isn’t as simple as asking a question. It’s a complex process that requires careful planning and continuous refinement, especially when you want your model to summarize reports, extract insights from conversations, or answer questions based on a large knowledge base. Over the past couple of years, as a Software Engineer in the innovation factory within Google Cloud’s Office of the CTO, I’ve been knee-deep in prompt engineering for major business use cases, working on large innovation projects for a selection of clients, including automating drive-thru orders at Wendy’s.

Through these projects, I’ve learned that crafting effective prompts is an iterative journey, not a one-and-done task. Writing instructions and constraints that work together seamlessly takes time and effort, especially when dealing with unexpected scenarios. But the payoff is huge. In this blog series, I’m sharing my practical learnings with you. We’ll dive into the challenges of prompt engineering, explore different models and configurations, and learn how to write prompts that get the job done. Whether you’re a seasoned pro or just starting out, I hope this guide helps you unlock the power of AI for use cases beyond generating a to-do list.

It’s Way More Than Just Asking a Question!

Alright, let’s spill the tea about prompt engineering. There’s a misconception floating around that it’s a piece of cake: just ask your LLM a simple question. If you’re chatting with Gemini or ChatGPT through their public interfaces for fun, then yeah, maybe that’s true. But when it comes to serious business applications, like summarizing hefty reports, extracting insights from conversations, or getting answers from massive knowledge bases, prompt engineering gets complex. Seriously!

We are not generating ideas for your next trip or a to-do list. One prompt, reused in a business application, needs to handle many different inputs, and a single wrong answer could cause real problems. We’re talking here about generated answers in specific formats, chained JSON outputs, and the potential for errors like hallucinations (when the model makes stuff up) or misinterpretations. That’s a whole different ball game than casual GenAI use. And when something goes wrong, like hallucinations, incorrect reasoning, style issues, false precision, API errors, or formatting mistakes, it can do serious damage to your business or brand.
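To make that concrete, here’s a minimal sketch of what a production-style prompt can look like once the output format is pinned down. The `call_model` function and the schema are hypothetical placeholders, not any particular client library or real project; the point is that the prompt spells out the exact JSON shape, and the application code validates the response instead of trusting it.

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client you use."""
    raise NotImplementedError

SUMMARY_PROMPT = """You summarize customer support calls.

Return ONLY valid JSON matching this schema, with no extra text:
{"sentiment": "positive" | "neutral" | "negative",
 "summary": "<one sentence>",
 "follow_up_required": true | false}

Transcript:
<transcript>
"""

def summarize_call(transcript: str) -> dict:
    prompt = SUMMARY_PROMPT.replace("<transcript>", transcript)
    raw = call_model(prompt)
    try:
        return json.loads(raw)  # fail fast on formatting mistakes
    except json.JSONDecodeError as err:
        # In production you'd retry, repair, or fall back here
        # instead of letting malformed output reach business logic.
        raise ValueError(f"Model returned malformed JSON: {raw!r}") from err
```

Even this tiny sketch shows why application prompts differ from casual chat: the validation step is what catches the formatting mistakes that would otherwise flow straight into the rest of your system.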

What Large Enterprise Prompting Projects Taught Me

In my last two years as a software engineer, I’ve been neck-deep in prompt engineering for major business projects at Google. From automating drive-thru orders at Wendy’s to generating medical summaries and answering questions over large legal documents, I’ve learned that crafting effective prompts is an ongoing process. It takes time, effort, and a lot of tweaking to get those instructions and constraints working together smoothly.

All these projects had the same things in common:

  • Constantly evolving prompts
  • Growing prompt size and complexity
  • Defining output formats (like JSON)
  • Using examples to guide the LLM (see the sketch after this list)
  • Carefully balancing instructions and constraints
  • Documenting everything
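
On that fourth point, here’s a minimal few-shot sketch. The menu items and categories are made up for illustration, not Wendy’s actual menu, and `call_model` is the same hypothetical client as before. The idea is that a couple of worked examples pin down both the task and the expected output format more reliably than another written instruction would.

```python
# Few-shot prompting: each example demonstrates the task and the
# exact output format before the real input appears.
FEW_SHOT_PROMPT = """Classify the drive-thru order item into a menu category.

Order: "two large fries"
Category: sides

Order: "a chocolate milkshake"
Category: desserts

Order: "<item>"
Category:"""

prompt = FEW_SHOT_PROMPT.replace("<item>", "a spicy chicken sandwich")
# call_model(prompt) should now return a single category word, e.g. "mains".
```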

So, if you’re working with clients, helping them use GenAI effectively, or just curious about how I use LLMs in the real world, this guide is for you. Over the next six weeks, I’m sharing my practical learnings, not the usual creative-poem demos. Remember, what works for one project might not work for another. It’s all about adapting and experimenting.

Read the next blog in this series: The Foundation: Understanding LLMs and Prompt Engineering, And Why It All Matters

Lee Boonstra is a Software Engineer at Google with expertise in Generative AI and Conversational AI.