How to become a better prompt engineer with a project debrief in GPT-4
Hold a successful project debrief with ChatGPT
A reader, noticing that my prompts regularly call for the AI to “rethink,” recently asked about my success with them: “Is GPT-4 self-reflexive?” No, it’s all just words. It’s a large language model. It’s not sentient, and it’s not a general intelligence. It recognizes (but doesn’t understand) patterns.
It’s no more capable of introspection than an online Scrabble dictionary.
But GPT-4 is able to analyze text and inspect language. So in a way, we can emulate a form of self-inspection. Effectively, we can make it audit its own output—our entire conversation—and comment on which prompts worked and which didn’t. This makes use of GPT-4’s improved capacity for advanced reasoning (rated 5/5 by OpenAI, higher than GPT-3.5’s 3/5).
Of course, GPT-4 can’t use this feedback to improve itself.
However, we can use the feedback to become better prompt engineers. My AI Whispering often involves having GPT-4 comment on the process with me in real time. But I also like to hold a project debrief on our successful conversations.
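If you script your conversations with the API rather than the chat window, the debrief step can be automated. Here is a minimal sketch of the idea: a helper that wraps a finished conversation transcript in a debrief prompt you could send back to the model. The function name, the message format, and the prompt wording are all my own illustrative assumptions, not an official API or the one true phrasing.

```python
# Hypothetical helper: wrap a finished conversation in a "project
# debrief" prompt that asks the model to audit which prompts worked.
# The prompt wording and transcript format are illustrative assumptions.

def build_debrief_prompt(transcript):
    """Given a list of (role, text) turns, return one message asking
    the model to comment on each prompt: what worked, what didn't."""
    lines = [f"{role.upper()}: {text}" for role, text in transcript]
    return (
        "Here is our full conversation:\n\n"
        + "\n".join(lines)
        + "\n\nAct as a prompt-engineering coach. For each of my prompts, "
        "comment on what worked, what didn't, and how I could phrase it "
        "better next time."
    )

# Example: a short two-turn exchange
transcript = [
    ("user", "Rethink your last answer and list your assumptions."),
    ("assistant", "Here are the assumptions I made: ..."),
]
print(build_debrief_prompt(transcript))
```

You would then send the resulting string back to the model as a fresh user message, giving you the same debrief you could run by hand in ChatGPT.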
Always exit interview your ChatGPT successes
Harvard Business School is keen on “Project Debriefs” (also called After Action Reviews, or Project Post-Mortems), and you should be too! They allow you to replicate…