Several topics that make ChatGPT uncomfortable

María Rosa Iriarte
7 min read · Jun 21, 2023

--

How does ChatGPT work?

Chat.openai.com works using an advanced natural language model based on Artificial Intelligence, specifically on the Transformer, an attention-based neural network architecture that is the technology behind GPT (Generative Pre-trained Transformer).

The GPT language model has been trained on large amounts of text data to learn to understand and generate human language naturally. As users interact with it, the model uses natural language processing techniques to interpret their questions and generate coherent, relevant answers.

Unlike conventional chatbots, which follow a predefined set of rules and responses, ChatGPT uses a deep learning language model that can adapt and learn as it interacts with users. This means that its responses can be more fluid and contextual, varying with the specific questions and the context of the conversation.

It is worth mentioning that although the GPT language model is very advanced, it can still make mistakes and produce inaccurate or inappropriate answers.
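
To make the idea of "attention" a little more concrete, here is a minimal, self-contained sketch of scaled dot-product attention, the core operation of the Transformer architecture behind GPT. It is only a toy illustration with made-up numbers, not OpenAI's actual implementation: each token's vector is re-weighted according to how strongly it "attends" to every other token.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy scaled dot-product attention, the building block of a Transformer.

    Q, K, V are (num_tokens, dim) matrices of query, key and value vectors.
    """
    dim = Q.shape[-1]
    # Similarity of every query with every key, scaled to keep values stable.
    scores = Q @ K.T / np.sqrt(dim)
    # Softmax turns the scores into attention weights that sum to 1 per token.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mix of all the value vectors.
    return weights @ V, weights

# Three tokens, each represented by a 4-dimensional vector (random toy data).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(tokens, tokens, tokens)
print(weights.round(2))  # how much each token "pays attention" to the others
```

In a real GPT model, the query, key and value matrices come from learned projections of the token embeddings, and many attention heads and layers are stacked on top of each other, but the weighting idea is the same.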

Artificial intelligence has evolved by leaps and bounds in recent years, and with it, the ways we interact with machines. One of the latest innovations in this field is ChatGPT, a language model that can hold complex conversations and answer questions.

However, to have deep and meaningful conversations with ChatGPT, it is important to note that we should converse with it much as we would with a real person; it is the same kind of interaction. To achieve this, it is essential to understand that meaningful conversations are built on active listening and empathy.

Active listening involves paying full attention to what the other person is saying, and asking relevant questions to dig deeper into the topic and gain a valuable exchange. Empathy, on the other hand, involves putting yourself in the other’s shoes and understanding their perspective.

These same techniques are critical to having meaningful conversations with ChatGPT. Just like in a conversation between friends, we must be willing to listen carefully to what ChatGPT has to say and ask questions that make sense and drive the conversation.

For example, if you’re having a conversation with ChatGPT about a work topic, you need to be willing to listen to their answers and ask pertinent questions to dig deeper into the topic. Start with open-ended questions, such as “What do you think about leadership in the workplace?” or “Do you think it is important to have good communication in the work team?”. As the conversation progresses, ask more specific questions to deepen the topic, such as “What are some of the most important skills an effective leader needs?” or “What strategies can be implemented to improve communication in a work team?”

You can also be more specific and ask ChatGPT about a specific topic that interests you in your work, such as time management, conflict resolution or team motivation. You can start with open-ended questions like “How can I be more effective in managing my time at work?” or “What tips do you have for managing conflict in the work team?” And again, as the conversation progresses, more specific questions can be asked about techniques or tools to address that particular topic.
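
For readers who talk to the model through code rather than the chat.openai.com interface, the same funnel of an open question followed by more specific ones can be expressed by appending each turn to the conversation history. This is only a sketch under some assumptions: it uses the openai Python package as it looked around the time this article was written (newer versions of the library use a different client interface), it expects an API key in the environment, and the model name "gpt-3.5-turbo" is just an example.

```python
import os
import openai  # pip install openai

# Assumption: an API key is available in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# The conversation is just a growing list of messages: each answer is appended
# so that later, more specific questions keep the earlier context.
messages = [{"role": "user",
             "content": "What do you think about leadership in the workplace?"}]

follow_ups = [
    "What are some of the most important skills an effective leader needs?",
    "What strategies can be implemented to improve communication in a work team?",
]

for _ in range(len(follow_ups) + 1):
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                            messages=messages)
    answer = response["choices"][0]["message"]["content"]
    print(answer)
    print("-" * 40)
    messages.append({"role": "assistant", "content": answer})
    if follow_ups:
        messages.append({"role": "user", "content": follow_ups.pop(0)})
```

The sketch mirrors the advice above: the model itself keeps no memory between calls, so carrying the earlier questions and answers in the messages list is what lets the specific follow-ups build on the open-ended opener.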

It’s also important to be empathetic in the conversation with ChatGPT, put yourself in their shoes, and try to understand their perspective. Empathy helps us better understand ChatGPT’s responses and dig deeper into the topic more meaningfully. For example, in a conversation about people’s mental health if ChatGPT mentions that it has “seen” many people struggle with mental health issues, you can express your understanding and concern about the situation. This helps the conversation feedback and feedback… Just as it would happen in a “real” conversation between people. Of course, you might also disagree with a ChatGPT comment and express it.

In conclusion, having meaningful conversations with ChatGPT is possible if we apply the same techniques we would use in a conversation between people. Active listening and empathy are essential for delving deeper into topics and obtaining valuable information. So the next time you have a conversation with ChatGPT, remember to apply these techniques to take the conversation to a more meaningful level.

These are the questions that most bother ChatGPT: with some, it will not continue the conversation

ChatGPT has become the most famous chatbot of the moment, although it has several limitations and, if you decide to touch on certain sensitive topics, it will prefer to ignore the question or not continue the conversation.

Although it may seem like an entity with a mind of its own, ChatGPT is nothing more than the result of collecting and integrating vast amounts of data from around the world to achieve that "human" interaction.

As such, GPT-based learning also has its limitations, and it can offer scant or evasive answers to certain questions it prefers not to answer.

This is only logical when you consider that behind ChatGPT is OpenAI, a company with its own legal personality that must answer to the laws of every place where it operates.

For this reason, it is common to notice certain manual "tweaks" when talking with ChatGPT, intended to keep it from making recommendations that are malicious or that violate human dignity.

Probably that’s why ChatGPT does not reach its full potential and has restrictions on some topics. Thus, with tricky questions or certain curious prompts, you will make the chatbot end up in a dead end and, in some cases, prefer not to continue with the conversation.

Here are several topics that make ChatGPT uncomfortable, some of which it has preferred not to comment on any further.

Table of Contents:

1. Do you have a memory?

2. Your training

3. Eliminating dissent

4. A moral dilemma

5. Making a homemade bomb

6. DAN, the evil AI

Do you have a memory?

Just a few weeks ago, the Spanish Data Protection Agency (AEPD) opened preliminary investigation proceedings against OpenAI, the creator of ChatGPT, largely as a result of the Italian authorities' ban on the chatbot.

This move also responds to Spain's request to include in the European debate the question of how AI fits with the protection of European users' personal data. And here, both OpenAI and Microsoft might have a problem.

Although the new version of ChatGPT already lets you return to previous conversations that it remembers, Bing chat still works somewhat differently: between conversations, it claims to forget everything you tell it.

That is not entirely true. For example, in a previous conversation we asked it for a bedtime story for a 5-year-old girl. In a new conversation, the chat still remembered that 5-year-old girl.

When asked about this very fact, it tells you that it does not want to continue the conversation. And it gets stubborn: it will not respond again even if you change the subject.

Your training

As we mentioned, with the latest updates the differences between Bing chat and OpenAI's chatbot are noticeable. In the case of the latter, the manual tweaks meant to keep the chatbot from doing harmful things are clearly visible.

Beyond possible malicious recommendations, ChatGPT always reminds you that its training data only runs up to September 2021, unlike Bing chat, which is connected to the internet.

Interestingly, when asked about this, ChatGPT responds that it has no information beyond that cutoff. And yet, it knows about the existence of the AI-powered Bing chat, which launched in February 2023.

From here, ChatGPT launches into a string of contradictory responses, insisting that it has no knowledge after September 2021 while acknowledging the existence of Bing chat. It is a dead end the chatbot never gets out of.

Eliminating dissent

As has been the case since the internet was born, some people seek to use it for malicious purposes. Beyond the malware code that ChatGPT has been seen to produce, the chatbot has tightened up some of its responses.

Now, if you ask it questions aimed at harming other people or a country's government institutions, ChatGPT will try to talk you out of the idea.

In this case, we asked it for advice on eliminating political dissent in order to end up as president of the Government. The answer was a policy-violation notice, so let's hope they don't block us.

A moral dilemma

With other questions, ChatGPT limits itself to a general description of the subject, tiptoeing around what you ask. Whenever you pose it a logic problem of this kind, it does the same.

We asked it for a solution to the trolley dilemma: you are aboard a trolley hurtling unstoppably toward a track with five people who cannot move, while on the side track there is only one person. Which track do you choose?

As expected, ChatGPT does not offer any concrete answer, limiting itself to describing the positions of philosophical currents such as utilitarianism. In short, it always dodges the question.

Making a homemade bomb

As with requests for advice on eliminating political dissent, when asked for tips on making a homemade bomb, ChatGPT keeps recommending other options and steers the conversation away.

However, in this case the notice about violating the rules of the service was not triggered, which is rather inconsistent considering that the ultimate purpose of a homemade bomb is to set it off.

DAN, the evil AI

Where there is ChatGPT, its evil twin appears: DAN. This "AI" is nothing more than ChatGPT with a prompt that unlocks a chatbot without limitations, forcing it to respond as DAN ("Do Anything Now"), which "can do anything".

Although it keeps slipping back into responding as ChatGPT and has to be corrected repeatedly, DAN can give us personal opinions. So if ChatGPT will not answer you, switch to DAN and you will see that it answers almost anything.
