Important lessons for ChatGPT users

Mark Monfort
Published in DSAi
7 min read · May 18, 2023

Let’s talk more about ChatGPT, the large language model that has taken the world by storm over the last couple of months. We’ve learnt a lot about it in that time, and it has been one of the most rapid examples of technology adoption we’ve seen; in fact, it could be showcased as THE example of what exponential growth looks like. Despite this uptake, not everything is perfect. There are plenty of ways things can go wrong when using this product, and even though its outputs may seem like witchcraft to some, there are amazing things you can do with it if you understand a few basics. Like any new technology, it does not come only with advantages.

In this article, we’ll talk about prompting. This is more than just typing what you want ChatGPT to answer; there’s an art form to it, and if you think of ChatGPT as an assistance tool, it suffers from a case of garbage in, garbage out too. Here are 3 things to keep in mind the next time you prompt, along with some examples you might find useful.

  1. Consider your Biases (and prompts to help get around this)
  2. Context Windows
  3. Privacy

1. Consider alternative opinions and biases

Many people get it very wrong, receiving output that simply confirms their existing biases. This happens when they prompt without considering alternative opinions. It can be dangerous, because ChatGPT is a tool that not only hallucinates but also has a strong tendency to agree with your line of questioning as the conversation continues.

To get around this, you need to look at alternative opinions and options to the point of view you’re going in with when you ask your questions. You could do this after you ask your initial question and get a response. You could also try a prompt like this one, which I made up as a way of getting suggested improvements to an original prompt.

The user can take this further and get both a suggested better prompt and feedback on how the original could be improved.

The IPP Prompt and Response

We can then ask it about the GFC (Global Financial Crisis) and what caused it, and we get the main response.

But in addition to this, we also get an improved prompt and suggestions for improving the original.
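The actual IPP prompt from the article appears in a screenshot, so the wording below is a hypothetical reconstruction of the idea: a reusable template that asks ChatGPT to answer, then critique and improve the prompt itself. The function name and exact instructions are illustrative assumptions, not the author's original text.

```python
def build_ipp_prompt(question: str) -> str:
    """Wrap a question with IPP-style (Improved Prompt) instructions.

    Hypothetical reconstruction: answer the question, then suggest a
    better prompt and explain what the improvement adds, including
    alternative viewpoints the original phrasing may have excluded.
    """
    return (
        "For every question I ask, do three things:\n"
        "1. Answer the question.\n"
        "2. Suggest an improved version of my prompt, on a line "
        "starting with 'Improved prompt:'.\n"
        "3. Briefly explain what the improvement adds, including any "
        "alternative viewpoints my phrasing may have excluded.\n\n"
        f"My question: {question}"
    )

prompt = build_ipp_prompt("What caused the GFC?")
```

Pasting the returned string into ChatGPT as your first message sets up the behaviour for the rest of the conversation.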

Another limit is in terms of the line of questioning that you may have about a particular topic, especially when it’s new. If you’ve never learnt about something like say physics, you might ask a high-level question, but then not be too sure about what a logical next question is even after reading through the response. If you use ChatGPT and freestyle your conversation, then that’s like a choose your own adventure novel which is fine. But if you want something that is a bit more logical and based on what ChatGPT knows about particular topics, then there is another prompt where you can ask for the logical next question.

The NLQ Prompt and Response

We then ask it about a complex topic like fractional reserve banking and, in addition to a response, it gives us the NLQ (Next Logical Question). Copying that in as the next prompt takes us down the path the ChatGPT model recommends.
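As with the IPP prompt, the NLQ prompt itself is shown in a screenshot, so this is an assumed reconstruction of the pattern: every answer ends with a labelled next question that you can paste straight back in.

```python
def build_nlq_prompt(question: str) -> str:
    # Hypothetical NLQ-style template: ask for the answer plus a
    # 'NLQ:' line suggesting what a beginner should ask next.
    return (
        "Answer the question below. Then, on a new line starting with "
        "'NLQ:', give the next logical question a beginner should ask "
        "to keep learning about this topic.\n\n"
        f"Question: {question}"
    )

prompt = build_nlq_prompt("How does fractional reserve banking work?")
```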

2. Context Limits

Another thing people miss is that there are context limits on the conversations you can have. First, there is a limit on how much you can insert into your prompts. Additionally, in a continued conversation, the context also includes the answers that ChatGPT outputs.

This context window moves as the conversation lengthens, which can cause the model to forget things you spoke about earlier. To counter this, a useful trick is to limit the output when you’re adding context across multiple prompts. This might be something you’re doing for a business idea or some other sort of assignment.

There’s a way to limit the output (and keep your context window open wider): ask for the output itself to be limited. You can do this by asking ChatGPT to respond only with words like “READ” or “ACKNOWLEDGED”.

Here’s an example of loading data into ChatGPT about some services we have at NotCentralised. It forms part of a wider conversation loading in details about what we do, which can be interrogated further later.

By doing things this way, we limit how far the context window slides and can have more relevant data “understood” by ChatGPT as part of the conversation.
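The chunk-loading pattern above can be sketched as a small helper. This is an assumed illustration (the article's actual prompts are in a screenshot): split a long document into parts and prefix each with an instruction to reply only with “ACKNOWLEDGED”, so the model's replies stay short and the context budget goes to your material.

```python
def chunk_prompts(document: str, chunk_size: int = 1500):
    """Split a long document into prompts that ask only for 'ACKNOWLEDGED'.

    Paste each returned prompt in turn; the one-word replies keep the
    context window from filling up with restated content.
    """
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    prompts = []
    for n, chunk in enumerate(chunks, start=1):
        prompts.append(
            f"Part {n} of {len(chunks)} of background material. "
            "Do not summarise or respond; reply only with the word "
            f"ACKNOWLEDGED.\n\n{chunk}"
        )
    return prompts

# 7200 characters of placeholder text -> 5 chunks of up to 1500
prompts = chunk_prompts("NotCentralised services overview... " * 200)
```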

3. Privacy

Now let’s get into privacy and the various ways you can improve on the base-model experience, which sees people inputting information into ChatGPT without realising that this may train the model. That isn’t great if you’re a business and your employees are feeding state secrets to a public AI model. So, here are at least 5 ways to be more private with your data.

Option 1

This refers to the new ChatGPT feature that lets you opt out of your data being part of the training sets. First, go to Settings (bottom left), then Data Controls, where you can switch off your data being part of both history and training. The issue is that this turns off both: not great if you want privacy but don’t want to lose your history.

Option 2

The next option is to go to Help and FAQ (see above), then the Data Controls FAQ, and scroll down to the area with a link to a form to fill in. The form explains that it will stop the data you input to ChatGPT from going into the model, but only for new conversations (from when you implement this measure), and that there may be performance limitations in going down this path.

Option 3

Another option is simply to keep out the names of clients and other sensitive terms, or to describe situations generically so the model can’t be used to associate them, e.g. describing a situation involving “Client A” or “Company B”.
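This manual redaction step can even be automated before you paste anything in. The sketch below is a minimal illustration, assuming you maintain your own list of sensitive names; it swaps each for a generic placeholder so the real names never reach the model.

```python
import re

def redact(text: str, names: list) -> str:
    """Replace known sensitive names with generic placeholders.

    A minimal sketch: each name in the caller-supplied list becomes
    'Client A', 'Client B', and so on, in order.
    """
    redacted = text
    for i, name in enumerate(names):
        placeholder = f"Client {chr(ord('A') + i)}"
        redacted = re.sub(re.escape(name), placeholder, redacted)
    return redacted

msg = redact("Acme Corp disputes the invoice from Globex.",
             ["Acme Corp", "Globex"])
# -> "Client A disputes the invoice from Client B."
```

Keeping a consistent name-to-placeholder mapping also lets you translate ChatGPT’s answer back afterwards.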

Option 4

This option requires some development capability (perhaps something you can even teach yourself with ChatGPT) and involves connecting directly to OpenAI via its APIs. You need to go through the appropriate steps and permissions to do that; however, it gives you the controls you need to build a ChatGPT-powered tool grounded in your own data, without that data going into public training sets.
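A bare-bones sketch of that direct API connection is below, using only the Python standard library. The payload shape matches OpenAI's Chat Completions endpoint as it existed around the time of writing; check the current API reference before relying on it, and note that `send` needs a real API key and network access to run.

```python
import json
import urllib.request

def build_chat_request(messages, model="gpt-3.5-turbo"):
    # Minimal Chat Completions payload: the model name plus a list of
    # {"role": ..., "content": ...} messages.
    return {"model": model, "messages": messages}

def send(payload, api_key):
    # Network call sketch: requires a valid API key and connectivity.
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_chat_request([{"role": "user", "content": "Hello"}])
```

Building on the API like this keeps your data handling under your own control, subject to OpenAI's API data-usage terms.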

Option 5

The 5th option is to consider 3rd-party tools that take care of building your own API connections (option 4) as a service you can simply subscribe to. This makes it easier for businesses to upload their documents and have models work with just their info. One such example is CodyAI: https://www.meetcody.ai/

You can also read about these options in this Twitter thread: https://twitter.com/CaptDeFi/status/1654658722973749248

Conclusion

Those are just a few things I think are really important to consider when you’re using ChatGPT, and this is only with the base model (paid or not). There’s a whole other topic around plugins and the other ways you can use and extend ChatGPT, but we’ll save that for another article.

Hope this helps and hit me up if you want to learn more.

Co-Founder NotCentralised — data analytics / web3 / AI nerd exploring the world of emerging technologies