On Building Products with LLMs
You have a business problem you want to solve with an LLM. You try several prompting approaches, but each falls short. Infuriatingly, peppering your prompt with “please” and “thank you” does not help. Unlike broken code, there’s no clear error and no clear fix. This is why you see so many listicles promising “10 ChatGPT prompts that will change your life!” Tempting as they are, these get you nowhere if what you want is to solve a complex problem specific to your product or business use case.
This was my experience. I felt despair and frustration. Building with LLMs is incredibly opaque, tedious, and uncomfortable. I resisted building LLM-powered features.
Now, several months into building our AI-Powered Challenge Coach, I’ve developed reliable strategies to work with LLMs. I still feel the despair and frustration often, but I have tools to help me move forward. This is what I do:
Understand the classes of problems that LLMs are great at solving
Know how to recognize an inference problem, a classification problem, a transformation problem, or a summarization problem, and learn how to prompt for each of these classes. Once you can recognize these problem patterns, you’ll no longer feel like you’re swinging blindfolded at a piñata; you’ll see promising avenues into a complicated problem. I got my foundation from this course from DeepLearning.ai: ChatGPT Prompt Engineering for Developers.
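To make these classes concrete, here is a minimal sketch of what a prompt for each pattern might look like. The templates, category names, and example inputs are my own illustrations, not taken from the course:

```python
# Illustrative prompt templates for four common LLM problem classes.
# The wording and field names here are hypothetical examples.

CLASSIFY = (
    "Classify the following customer message into exactly one category: "
    "{categories}.\n"
    "Respond with only the category name.\n\n"
    "Message: {message}"
)

TRANSFORM = (
    "Rewrite the following text in a {tone} tone, preserving its meaning.\n\n"
    "Text: {text}"
)

SUMMARIZE = (
    "Summarize the following text in at most {max_words} words, "
    "focusing on {focus}.\n\n"
    "Text: {text}"
)

INFER = (
    "Based on the following review, infer the sentiment (positive, "
    "negative, or mixed) and which product feature is being discussed.\n\n"
    "Review: {review}"
)

# Filling in a template produces the final prompt you send to the model.
prompt = CLASSIFY.format(
    categories="billing, shipping, product quality",
    message="My package arrived two weeks late.",
)
```

The point is less the exact wording and more the habit of naming which class you are in before you start tweaking words.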
Break down big problems
In my experience, LLMs do not reliably solve more than one problem at a time. Most business problems worth solving are more complicated than a single inference or transformation. When you’re comfortable with the problem classes and how to solve them, use this knowledge to break down your business or product problem into smaller problems that are easier for the LLM. Solve those sub-problems independently. Your product development challenge is now to weave these multiple solutions elegantly into a single user journey or feature.
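As a sketch of what this decomposition can look like in code: one user-facing feature becomes a small pipeline where each step is a single, simple LLM call. The `call_llm` function, the challenge categories, and the prompts are placeholders I made up, not Practica’s actual implementation:

```python
# Sketch: decompose one business problem into sub-problems, where each
# sub-problem is a single LLM call of one class (classify, summarize, infer).

def call_llm(prompt: str) -> str:
    """Placeholder: wire up whatever LLM client you actually use."""
    raise NotImplementedError

def coach_reply(user_message: str, llm=call_llm) -> str:
    # Sub-problem 1 (classification): what kind of challenge is this?
    challenge_type = llm(
        "Classify this workplace challenge as one of: conflict, career, "
        f"workload. Reply with one word only.\n\n{user_message}"
    ).strip().lower()

    # Sub-problem 2 (summarization): condense the message for later context.
    summary = llm(f"Summarize this message in one sentence:\n\n{user_message}")

    # Sub-problem 3 (inference): draft advice using the earlier outputs.
    return llm(
        f"The user has a {challenge_type} challenge, summarized as: "
        f"{summary}\nWrite two sentences of practical advice."
    )
```

Because each step does one thing, you can test and tune each prompt in isolation, then worry separately about stitching the steps into a coherent user journey.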
Get your whole team involved in engineering
Finally! The interface to our engineering work is natural language instead of code, which means non-coders can get in on the engineering action. When we approach an LLM-powered feature at Practica, we split the prompt engineering into its own tasks, agree on the shape of the input and output, and then have our non-coders iterate on the prompt while the engineering team builds the product and infrastructure around it. Features come together fast this way.
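One way to sketch that separation of work: keep the prompt as plain data with an agreed input/output contract, so a non-coder can edit the template file while engineers code against the shapes. The schema, field names, and template below are hypothetical, not Practica’s actual setup:

```python
# Sketch: the prompt is data a non-coder can edit; the only contract is
# the placeholder names going in and the JSON keys coming back out.

from dataclasses import dataclass
import json
import string

# This could live in its own file, owned by whoever writes the prompt.
PROMPT_TEMPLATE = """\
You are a challenge coach. Given the user's goal and obstacle, reply
with JSON containing the keys "advice" and "next_step".

Goal: ${goal}
Obstacle: ${obstacle}
"""

@dataclass
class CoachInput:
    goal: str
    obstacle: str

@dataclass
class CoachOutput:
    advice: str
    next_step: str

def render_prompt(inp: CoachInput) -> str:
    # Engineers own this glue; prompt writers never touch it.
    return string.Template(PROMPT_TEMPLATE).substitute(vars(inp))

def parse_output(raw: str) -> CoachOutput:
    data = json.loads(raw)
    return CoachOutput(advice=data["advice"], next_step=data["next_step"])
```

With the contract fixed, the prompt writer can rework the template freely and the engineers can build UI, storage, and error handling in parallel.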
All of this is high-level and hard to implement without details and examples. I would love to expand on each point and share case studies from the Practica team’s experience building our AI Challenge Coach. If that’s something you want to see, please engage with this post so I know you’re out there.