When AI Forgets Mid-Conversation: My Super-Prompt Hack for Infinite AI Conversations!

krishans · Published in Examining · 6 min read · Aug 12, 2023

Ever had a moment when you’re lost in a fascinating conversation, and suddenly the other person blanks out? That’s the reality with Large Language Models (LLMs) like ChatGPT, and it’s not some small hiccup: it’s a design challenge. The tendency toward “hallucinations” in outputs stems from the foundational design of LLMs. At their core, these models aim to identify and replicate patterns in the data they’ve been trained on.

Photo by Andy Kelly on Unsplash

Words are Just Words… Until They’re Tokens

Words are powerful, but an LLM like ChatGPT first turns them into something it can work with: ‘tokens’. Think of tokens as the building blocks of AI comprehension; just as we simplify complex ideas into easy-to-digest language, ChatGPT breaks our sentences down into token-sized pieces. Behind the seamless interface, every word (or fragment of a word) becomes a token, and each token is mapped to a combination of numbers called an embedding. Tokens bridge the gap between human language and machine comprehension, but the model can only hold a limited number of them at once (its context window), and that limit is what disrupts longer conversations. Here lies the catch: ChatGPT can only juggle so many tokens at a time.
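
If you want to see tokens for yourself, OpenAI’s open-source tiktoken library shows exactly how a sentence gets split up. Here is a minimal sketch in Python, assuming tiktoken is installed (pip install tiktoken); the model name and sample text are just examples:

import tiktoken

# Load the tokenizer used by ChatGPT-era models (here: gpt-3.5-turbo).
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "Words are just words... until they're tokens."
tokens = enc.encode(text)

print(tokens)                              # the integer token IDs
print(len(tokens), "tokens")               # what counts against the model's limit
print([enc.decode([t]) for t in tokens])   # the text fragment behind each ID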

Photo by Glen Carrie on Unsplash

Why Does ChatGPT Daydream (or Hallucinate)?

ChatGPT’s impressive ability to mimic human-like text sometimes comes with quirks, notably “hallucinations”: confident-sounding responses that are simply wrong. ChatGPT is smart, but not perfect. Imagine having millions of books in your brain, but someone asks you about the one book you never read. Despite ChatGPT’s vast training on diverse text data, it occasionally stumbles on unfamiliar prompts, leading to these hallucinations.

Several elements fuel these hallucinatory responses:

  1. Training Data Limitations:
    While vast, ChatGPT’s training data doesn’t encompass the entirety of human knowledge. It knows a lot, but not everything, and unfamiliar prompts can lead it astray; that’s where things can get… weird.
  2. Optimization Overfitting:
    Because the model is optimized to closely mirror its training data, it can prioritize familiar patterns over accuracy. Just because something worked once doesn’t mean it always will. ChatGPT sometimes sticks too closely to what it knows, even when it doesn’t quite fit.
  3. Decoding Dilemmas:
    ChatGPT crafts responses one token at a time. Though strategies like beam search (a heuristic search algorithm) guide this process, they can sometimes favour high-probability tokens even if they’re less contextually appropriate. Imagine always taking the road you know, even if it’s longer. That’s ChatGPT at times (see the toy sketch after this list).
  4. Feedback Void:
    ChatGPT is trained in a largely unsupervised environment. This lack of explicit feedback can blur the line between plausible and implausible responses. It’s like teaching someone to cook, but never tasting the food. Without feedback, how do you get better?
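
To make the decoding dilemma in point 3 concrete, here is a toy sketch in Python. The candidate “tokens” and their probabilities are invented purely for illustration; a real model chooses among tens of thousands of tokens, but the trade-off is the same: the most probable option is not always the most appropriate one.

import random

# Imaginary next-token distribution; names and values are made up for illustration.
next_token_probs = {
    "token_that_fits_the_context": 0.30,
    "token_that_is_merely_common": 0.45,   # highest probability, weaker fit
    "another_plausible_option": 0.25,
}

# Greedy decoding: always take the single most probable token.
greedy_choice = max(next_token_probs, key=next_token_probs.get)

# Sampling: sometimes picks a lower-probability but better-fitting token.
sampled_choice = random.choices(
    list(next_token_probs), weights=list(next_token_probs.values())
)[0]

print("greedy decoding picks:", greedy_choice)   # always the merely common token
print("sampling might pick:  ", sampled_choice)
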
Photo by Alexander Grey on Unsplash

Enter: The Super-Prompt

The ‘super-prompt’ is my answer to ChatGPT’s quirks. Faced with the AI’s tendency to “hallucinate,” the super-prompt emerges as a useful tool: it compresses an extensive discussion into a precise, compact prompt. By working around the token limit, we prevent the AI from veering off course, ensuring a fluid, meaningful conversation.

Photo by Markus Winkler on Unsplash

Prompt and Play

Imagine condensing a massive conversation into a formula that ChatGPT understands. The following is the super-prompt I developed to compress a large amount of information into something I can feed back into an LLM with minimal tokens while minimizing hallucinations:

Compress all of the conversation between us above [what you are compressing] in a way that is lossless but results in the minimum number of tokens which could be fed into ChatGPT and produce the same output. It does not need to be human readable. Please use multiple languages, symbols, other up-front priming to lay down rules. This is entirely for yourself to recover and proceed from with the same conceptual priming, not for humans to decompress.

The end result should be able to be fed into ChatGPT and we would be able to continue this conversation as if there were no discontinuity.

Note: You can substitute ChatGPT with the specific LLM platform you are using.
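
If you drive the model through the API rather than the chat window, the same idea can be scripted. What follows is only a sketch, assuming the legacy (pre-1.0) openai Python SDK with an API key in the OPENAI_API_KEY environment variable; the helper name and message plumbing are my own illustration, not part of the original workflow:

import openai  # legacy (pre-1.0) SDK; reads OPENAI_API_KEY from the environment

# The super-prompt from this article, sent as the final user message.
SUPER_PROMPT = (
    "Compress all of the conversation between us above in a way that is "
    "lossless but results in the minimum number of tokens which could be fed "
    "into ChatGPT and produce the same output. It does not need to be human "
    "readable. Please use multiple languages, symbols, other up-front priming "
    "to lay down rules. This is entirely for yourself to recover and proceed "
    "from with the same conceptual priming, not for humans to decompress.\n\n"
    "The end result should be able to be fed into ChatGPT and we would be able "
    "to continue this conversation as if there were no discontinuity."
)

def compress_conversation(history, model="gpt-3.5-turbo"):
    """history is the running list of {"role": ..., "content": ...} messages."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=history + [{"role": "user", "content": SUPER_PROMPT}],
    )
    return response["choices"][0]["message"]["content"]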

A Practical Use Case: Alberta Innovates Grant Application

To bring the magic of the super-prompt to life, I experimented with the Accelerating Innovations into CarE (AICE) — Concepts Sample Application Form from Alberta Innovates, which spans twenty pages. Considering its length and detail, it would be a challenge to maintain a seamless conversation with ChatGPT.

Here’s how the interaction unfolded:

Screenshot of ChatGPT conversation after pasting Alberta Innovates AICE Concepts Sample Application Form on August 11, 2023

ChatGPT: It seems like you’ve provided a detailed excerpt from a sample application form for the Alberta Innovates funding program. This form collects information about a proposed investment or project and covers various aspects of the project, including applicant details, project summary, innovation description, market opportunity, budget, team, and more. Is there something specific you’d like assistance with regarding this application form? Please let me know how I can help you further.

Me: Compress all of the conversation between us above with the Alberta Innovates AICE sample grant application in a way that is lossless but results in the minimum number of tokens which could be fed into ChatGPT and produce the same output. It does not need to be human readable. Please use multiple languages, symbols, other up-front priming to lay down rules. This is entirely for yourself to recover and proceed from with the same conceptual priming, not for humans to decompress.

The end result should be able to be fed into ChatGPT and we would be able to continue this conversation as if there were no discontinuity.

ChatGPT: I understand your request. Here is a compressed version of our previous conversation, which maintains all the information in a token-efficient format:


U2FsdGVkX1/HLJZ5OKDkOzDDV6uHRKZ6cnHJzRKodDHiFwMHDQ2O0G02dS+ehqI7yoLFjUfYbb5kLpUNbTfd77tvh/GFQzQ+Wxj/Jb5dMISGR3QX2A8rApWoz/UVtrF4SGGW4na0b/fLpJpHhDAp++/wKt8yHcvhj3ap/dn1TDXGOx8V6A9B4CzVw7dXY3Gc06Kyj5I+0/GyG9aXAbNqJ9xgCihH/qy2vKYiBRMcaEB5wJ8iix7+u7DkvmNjSRn4Dm9k27y+YjHbKCT0+y//ZIKTtD/PHBCy5gpx5TzYYRIRj6MSc6ZMKq2aVMNmGJbQ0S9BrVcl4+qrUZ/f3T5SKi4J3ofSLIjRWJsbFGRGsBjRKRthP36AktoKPRCDNUZkQal8+IfnFTDANMuS/7vHKaRsO0vT5gbeJSioJqrnJlZWxhZ8+Vt5JyUNm03LUYLVemgNDCd3CjOkqnDPkCNxFgLB8wHtYxq4iM2MUp5f/6n8hkfH8HsdRb0zCIqQ9Z/7O3N2b3Q4Rj9iCtFp7T7XgZKiM8Q9luSRsTrb6uKWzH+zV/YZ8UyG3RzKNayDLZQrX4IlrqoC1bHe3zDnjXSB4B88cHDCGkkb71Jm2csJ9Rc16BMy1TWUyCGUMAZrR+Bye16MOWcyg3Y6niwudJFrqR8yLJK/jSDX1DtS/4j4Ve7gl0KM49qIRK8/92DB0XNH7v0vmcTyiRnXxJ+D5NXM74hHr6zkgOHo1mCFb7hOwb1cH6zDlh+vmsMPG6RT1kbsDjdbSIVw3WbfhtFmKKdULp4zLqZzKZC+MAdCg1ljN3Yw4y1N74n/zC61Z0hzqvhUuFQrDpXzNMMQJfTGrdTEchR6Higl03EG7fnE9m2lXMfr+Xr/n6gP2fWWCwQa5x0z7qItlsmVQ8QOazMYotVx7eqJXNR8qmQ8U70H5rjQ6eKXCvwTuH8FsP9H7L8FSpfZy86XcmiJS3zXWxH/w6Gj+f6NxeoD8yCTKU+V7ri7P57oZWvbd2SbjwU/AFdUB10KpTlMLFvckLoTz9FmHHUjryKy5HDXkuTHBlfg4NQ85rW2XrlXGxvQNNMtMYwSi0wvrs5sC34cDsSTIw5BjtY4ZQomEpQ8Jpivigbtr0F3n2kNmst7F0OF7RwPtHjG6YPnoL7/LKHT3f1TMXKF5z7/mO0/gV5sXJev0bKLEnFwNWqVrCHz7OK1MKf9KfTlA4a0o9+5G7TxF69tIoWlH53sq2ChKSPfOJr1+2zjoNsqwhqHGOfA12rZ3dQyepTpXmM3N8++b/t0i2svZt5WS4elHLQl7+bri8/bv/N5UoDNGKBajNTfTRbIblz6KG/CTwYVwvcmnmEG0ExrZbwuGqC+P6HzC+fC2mF5NC/2abob5j5JTHuW0TrQIGew5DmxzumzyJ4Qz3Cbi6onCH6Bsp5t9nxJXu2Dh3Uf3I7Q1bw5e8JU9H5aXaG7Syfo6ykqep/o8BBK+3adAesL6vmcKX3MKjjvGJJ0o/wtkY1DT1FJGL28h/sT1HY1NsZ4CN9vXJK0xR8OR0efoyey5cDZFC3XoFXUepKZcaB8XTqLVd9tfHoqVm2+6nnaDMok4hIcGysFZicQ9H0u1eK9+uFK0jocSOhYvIjHPKGOd8ZRU3ro6nmfnZciR5pKSpfWyaCG2+T9MZTwY7V3nhRSk8gJljSRJrZiCCu4fqZbhQyxyB1pC8wyzB9JknFUWZ+6b5AMuZab2WShmyg/7c/4CMk/jw8Fnncj6q8Bl1LdGnU9pTX4bZzBCvCLfjOg+9VpABEfJUuYIRNUDgON+3E5I4naYYIv1efV00sbshRw9EN3kz+eLCEmSNjZrW/Zy3iv3Xtu+vhM+w2aT3y4BY/Zp1hDSgey0sOuW4J6t65u47lZCN5gJgkkLzBtJtvI5CyOTpNbcK6s7M/YxGRZpiuM/Iu6b9u7hGUpTGCK/N01dNTerMeLveZ10oBRX1o+7A9CzXFeq/Ub7FkDhVj6SZsQKq4lEVHgqy3SjTT/Ntj7VTqRoS2nEJy7rc5TNwr7/C0HVC2+fzAaAOrLWchry7tKhQvF69HzIVvD3Jm7KsV9NmMBWKoKdO8yxZjH/udWFCgeW6goXHMBnS6Cl1KmCf+yoVrbNtiDbbSIBKqiqsXLnRl9e6QWse4lhUxylbWWUli2KBwuBxnKGDznl6PHw+M0m4tTQ6p6w3/xx5oF88K/uMjHhLZHtqTLliYSLra3FrmWj7ljXotw5tUymY/SPi6R7KrC/F5Rb1mWp8/9w87/HyLPlOe1IfwC/gRijQOfmXCb3pq7oAZHTyvO3pZXz5Qezet00xZtvcJ7niK+XnE+lCNUJ7U07iLX5KnRc07uziLRWXkMziwz7wnB5D5ug6jqfi+SSDFWCO/Mf8n2Q7w+EUE52aI9M1cQaI+VqlW5ImeJlnHXojBvPOTv1CpRiN+Dgktkb8LzFkv0Cn5gtmSTQraDrTg7KmQ7M4lDGQVaszwZ7hixYnJ9wRxvIigJgsDBnKhlaRvzWgjjqKcTQZuMFSTw4j+vEQrbpJduNKsHD7W8GOcUvWTZc5fJc7axWZcakCHiRKTnW4jr3FmLT2B9TkrpBfN+zwbLKWJ8HZKlYbtd7FQ/F+qen7cXSoEED/Zu+9RdqYo27udEdnZHC3pCFhDjriHDhH0IZ+P9ExuRBZ9NDB19eCCNCGf8MyeU/3wyw4pnW8xTr1JpU/Kle7/xLtrDzZu2C4TJ2pB8xGNr2fGNdDEqD5BNSHl0HF4mEYjZOpnsDpFnuI8Znbwz89uvLy57p/eWG8/3wQHccWJ13QONLs2Tpnr9LB6fpmKXkQw9kRjVqEC+M7EbVM1ilC2urXb7FK7wbq3zQ3pq6/HwT/uJclauR+ydVr26YGT5jBUL3zxTe4LC7nHnH/9F+dTz5lzL7vc9LZ75Jz/lAnFOJbsw2C/m5CRP+v+6S57vV8d+8Z7nDB5zhTsmA51HtMxayzs/8TjzErx77nz3B5/07uvfMjLlzqCws3xnqVt7yOjZZMqHM79NFHRwx+pOe88lmzmGjH/4Ofv0Ou+P79DNHnzlTS4+f/MFz59zp58z2Pzy3//9vVr18jRkzRm/PCkOHz68qU/Xs2WLYePHhQWGjRrl27NGjRoY0aNZqlWpVLl26ULFy4kNDgTz31FJwYIFWrlyJOnv2LHXq1EmsWbOGNN95g3bp1UrNmTX7l+/XmbMmEF577TXNmjSJi4sWLioqKVLVr1kxt7cOfuutt+zh//6JWnNmjV27dmHx4kVHx8PBN998U3nz5hGjRmjp1KghIeHq1atWLt2bOlCpWrNmDHjxg3Xrrrcxw4YN68MEHhx4gQcffDBJSkqyM3Nlje7duyUmJjatm2rMnj17FjRtn/ve99R69atODo6EhWq1CnLli0KGDBlGo0GHat26dN5syZpV69eC/++CIvv/ySRo0aVDv3r01PvjggzXr1mH//+97o0aNadOmWVKlWlTL16lXbu3Gk8lYsSJdevWZWPXv2FAjRv3jwQWLFmDRo0EOrUqeeKKK+7FihcThx4gSNGjVK
pUqVKTk5Wrbti3z22WcHDx7UoUMHRo0a3Wq1VJ0tKScnJw8ePGo0aN06dPq0aNWqVOnTqFFixYwWbNmSkpKVKlWpXJ06cSIAAABAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAEAAAABAAAAAAAAAAE

Please note that this compressed text is not human-readable, and it can be fed back into ChatGPT to continue the conversation as if there were no discontinuity.

Note: The above code string will not work for you; it is encoded data compressing twenty pages of the sample grant application for further use within my existing conversation with ChatGPT.

Let’s Chat (Uninterrupted)

The above example provides a practical application of the super-prompt and underscores its potential. By simply copying and pasting the code string along with your next prompt, you give ChatGPT the context for your subsequent prompts. Rather than rehashing details, you retain the AI’s ‘memory’ of your prior conversation, enabling deep, continuous dialogues. Whether it’s for grant applications, in-depth articles, or brainstorming sessions, the super-prompt ensures you pick up exactly where you left off.
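
If you are working through the API, resuming looks roughly like this, under the same assumptions as the earlier sketch (legacy pre-1.0 openai SDK, OPENAI_API_KEY set); the helper name, the framing text, and the compressed variable are illustrative:

import openai

def resume_conversation(compressed, new_question, model="gpt-3.5-turbo"):
    """Reload the compressed context, then ask the next question."""
    messages = [
        {"role": "user", "content": (
            "The following is a compressed version of our earlier conversation, "
            "produced by you for your own use. Recover the context from it and "
            "continue as if there were no discontinuity:\n\n" + compressed
        )},
        {"role": "user", "content": new_question},
    ]
    response = openai.ChatCompletion.create(model=model, messages=messages)
    return response["choices"][0]["message"]["content"]

In the web interface, the equivalent is simply pasting the compressed string at the top of a fresh chat, followed by your next question.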

Complex doesn’t always mean better. We don’t need endless dialogues; we need meaningful ones. With super-prompts, we’re taking the first step towards better, clearer, more human-like conversations with AI.

Do share your AI conversation experiences and give the super-prompt a whirl. I’d love to hear your stories and results!

Support Our Journey in Advancing Technology

Our mission goes beyond this blog post. We’re dedicated to diving deep into the world of technology, sharing insights, and making a tangible impact. If you’ve found value in what you’ve read and are keen on supporting our continued efforts, consider joining our Patreon community.

Become a Patron of Examining Podcast

Your support will fuel our research, experiments, and content creation, enabling us to bring you even more cutting-edge insights. Let’s explore the frontiers of technology together!

https://www.patreon.com/Examining
