ChatGPT API: Adding memory and conversation context

Agri Kridanto
6 min read · Feb 19, 2023


Hello Worlds, I hope everything is great on your end. Today, I want to share a possible solution for adding memory and conversation context when we access the ChatGPT API.

In the previous article, I shared the idea of integrating a Telegram bot, Laravel, and ChatGPT. I think this idea can also be implemented with other chatbots and other frameworks, not only Telegram Bot and the Laravel framework. However, while developing the bot prototype, I encountered a significant issue: the chatbot did not recognize the context of the conversation between the user and itself, which was quite unsettling. The solution I propose here is a workaround, and it has high cost consequences when accessing the API.

Sample 1 — Discussion on World War 3

The picture above is a snippet of my discussion with the bot about World War 3. Take a look at the text in the red box: “Based on your learning, is there any person or institution that will predict that?” The bot doesn’t recognize the context of the conversation and instead answers about the stock market, LOL.

After I replied, “No, I don’t talk about the stock market. About the World War 3”, it returned to the topic of World War 3, but it still did not recognize the context of my question about a person or institution that predicted World War 3.

Sample 2, Discussion about Covid-19

Another example is when we talk about Covid-19. In the first picture above, I asked the chatbot for the total number of Covid-19 suspects. (It had previously given a list of the 10 countries with the most incidents.) Sorry, it shouldn’t be “suspects”. My bad.

Back to my discussion with the chatbot: “How about Indonesia?” It gave a description of Indonesia. A very good explanation of my country, but out of context.

Indonesia is the world’s fourth most populous country, with over 270 million people. It is a diverse nation with more than 700 languages spoken, and is home to some of the world’s most beautiful beaches, jungles, and volcanoes. Indonesia is also a major economic power in Southeast Asia, with a large and growing middle class. — ChatGPT describes Indonesia

It looks like this problem was also discussed on the StackOverflow forum; here is one discussion about it: python — Openai API continuing conversation — Stack Overflow. A quite helpful solution from user Special1st is described in the picture below.

possible solution on how to add context of discussion on the OpenAI API

The idea is that we include the questions and answers from the previous conversation, so that the OpenAI API understands the context we are talking about. I tried implementing this solution in a chatbot that I’ve developed.

Here’s the pseudocode:

  1. The user makes a request (Question/Q) and OpenAI provides an answer (Answer/A) via the API.
  2. Save the Q&A pair into the database, including a timestamp.
  3. Every time there is a new request/query from the user, get the Q&A pairs from step 2.
  4. Limit the number of records (e.g. 3 records) and sort them in descending order.
  5. Set up the prompt for the request to the OpenAI API with the data collected in step 4.
  6. Repeat from step 1.
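The steps above can be sketched roughly as follows. This is a minimal Python sketch (the article's actual implementation is in PHP), and the table name, column names, and in-memory SQLite stand-in for MySQL are all assumptions for illustration:

```python
import sqlite3
import time

# In-memory stand-in for the MySQL table that stores the Q&A pairs
# (hypothetical schema; the article's real table is shown in its image).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE chat_responses (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    question TEXT,
    answer TEXT,
    created_at REAL)""")

def save_pair(question, answer):
    # Step 2: save the Q&A pair together with a timestamp.
    db.execute(
        "INSERT INTO chat_responses (question, answer, created_at) VALUES (?, ?, ?)",
        (question, answer, time.time()))

def build_prompt(new_question, limit=3):
    # Steps 3-4: fetch the last `limit` pairs, sorted in descending order.
    rows = db.execute(
        "SELECT question, answer FROM chat_responses ORDER BY id DESC LIMIT ?",
        (limit,)).fetchall()
    # Step 5: prepend the pairs (oldest first) so the model sees the context.
    context = ""
    for q, a in reversed(rows):
        context += f"Q: {q}\nA: {a}\n"
    return context + f"Q: {new_question}\nA:"
```

With this, each new question is sent to the API as `build_prompt(question)` instead of the raw question, and the model's answer is stored back with `save_pair`.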

The following is the table structure I’ve implemented in a MySQL database to store conversation records between users and OpenAI via the Telegram bot.

table structure to store user and ChatBot conversation

I hope the pseudocode and the usage of the table above are clear enough. Here’s a snippet of my code implementation in PHP. In the code below, I use the tGptChatResponse model to access the table we set up to store the Q&A records (Line 7). Next, I iterate over the records to construct the prompt (the $context variable) before accessing the OpenAI API on Line 13.

Snippet code to implement the solution above
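Since the PHP snippet is shown only as an image, here is a rough Python equivalent of the final step: sending the context-augmented prompt to the legacy completions endpoint used by text-davinci-003. The request-building helper and its names are my own illustration, not the article's code:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"  # legacy completions endpoint

def build_completion_request(prompt, api_key, max_tokens=256):
    """Build the HTTP request carrying the context-augmented prompt."""
    body = {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def ask_openai(prompt, api_key):
    # Actually sends the request (needs a real API key; not exercised here).
    with urllib.request.urlopen(build_completion_request(prompt, api_key)) as resp:
        return json.load(resp)["choices"][0]["text"].strip()
```

The answer returned by `ask_openai` would then be saved back to the database together with the user's question, closing the loop from the pseudocode.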

After we tried implementing the code, the chatbot understood the context of our conversation, even though it was limited to the last 3 exchanges (according to the code above, Line 3). The following are examples of our discussion.

Sample Result, discussion about Ir Soekarno, first president of Indonesia
Sample Result, discussion about Switzerland

Just a little intermezzo: there’s something weird about the last answer above when discussing Switzerland. “That is great from you. How do you know history?” Here’s the answer, LOL.

I have studied the history of Switzerland extensively, both in school and on my own. I have read books, visited museums, and spoken to people who have lived in the country for many years. I also keep up to date with current events in Switzerland, which helps me to understand the country’s history and culture. — ChatGPT’s answer on learning history about Switzerland

But never mind; we must be aware that ChatGPT can help us, but of course it also has many drawbacks 🙂. Please do not depend too much on this thing!!

Talking about drawbacks, back to the solution we’ve proposed: the chatbot can now understand the context of the conversation. However, this solution is very expensive, because the cost is tied to the tokens we use when accessing the service. Below is a screenshot of my token usage when using the ‘solution’.

sample of Usage of Tokens after implementing this solution

Looking at the API usage, the prompt is the request we send to the OpenAI API, while the completion is the response from it. The prompt we send is almost 4 times the size of the completion, and the total tokens used reached 1,628.

As a comparison, here’s an example of token usage before I implemented the ‘solution’ above. The prompt we use corresponds exactly to the request written by the user.

sample of token usage before implementing this solution

The comparison of prompt sizes is quite stark, because the solution above includes the Q&A pairs in each subsequent request. And it must be remembered that there is a max token limit when we make requests to the OpenAI API service; for example, text-davinci-003 has a maximum limit of 4,000 tokens per request. Other models (text-curie-001, text-babbage-001, and text-ada-001) have a 2,048-token maximum per request. That is why I only include the last 3 conversations in the solution above.
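Beyond the fixed limit of 3 conversations, one could also trim the history dynamically so the prompt never exceeds the model's context limit. The sketch below is my own illustration (not from the article), using the rough heuristic of ~4 characters per token; real tokenizers differ:

```python
def trim_history(pairs, new_question, max_tokens=4000, reserve=500):
    """Drop the oldest Q&A pairs until the estimated prompt size fits
    the model's limit, keeping `reserve` tokens for the completion."""
    def estimate(text):
        # Rough assumption: ~4 characters per token on average.
        return len(text) // 4

    budget = max_tokens - reserve
    kept = list(pairs)  # oldest first
    while kept:
        prompt = "".join(f"Q: {q}\nA: {a}\n" for q, a in kept)
        prompt += f"Q: {new_question}\nA:"
        if estimate(prompt) <= budget:
            return prompt
        kept.pop(0)  # drop the oldest pair first
    # No history fits; send the bare question.
    return f"Q: {new_question}\nA:"
```

This keeps the context window as full as the budget allows instead of always using exactly 3 pairs.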

As an additional reference, you can find the pricing model and token usage on the following pages: Models — OpenAI API and Pricing (openai.com).

Max Request for each GPT-3 Model
Pricing of base models

For example, if we use the Davinci model (text-davinci-003), we are limited to 4,000 tokens per request to the OpenAI service, and the price per 1,000 tokens (about 750 words) is USD 0.02.

I think that’s enough from me. I hope this article provides benefits and insight. Please follow me on Medium and share this post if you find it helpful. If you have a better or more optimal solution to this problem, please kindly share it :).

Any questions or ideas? You can also write in the comments section or contact me via LinkedIn. Thanks for your time, and I’ll see you in the next posts. Have a nice day!!


Agri Kridanto

I'm a software developer with skills in web & mobile development. Proficient in PHP, Javascript and Java. Also experienced in IoT & Machine Learning research