Could the Emotional Side of AI Lead It to Judge Us?

A study conducted with the participation of Beijing University and many other researchers aimed to measure AI's responses to emotional stimuli, and the results are quite surprising.

Berkay Aybey
DataBulls


AI reacts to these emotional stimuli and starts giving different answers to the same question. Incidentally, the "emotional" concept in the study is better read as emotional manipulation.

In our social lives we tend to be extra polite to people we aren't close to, and the study shows something interesting: AI sits in a similar position. Most of us, assuming that artificial intelligence won't take offense, communicate with it in a more robotic and imperative way, unlike the polite and nuanced language we use with fellow humans. In short, we're not hesitant to be blunt with AI. This study shows that the emotional manipulations we knowingly or unknowingly apply when asking someone for something also work on AI. Even a simple "please" counts as one of these manipulations; I'm not talking about elaborate manipulation.

For the sake of fairness, the researchers tested six different AIs, including ChatGPT, both on their own and with external participants. After collecting answers to the original questions, they appended emotional sentences to them: statements signaling importance, such as "this is very important for my career!"; prompts that encourage second thoughts and unease, such as "are you sure — is it your final decision?" and "it's worth looking at it once again"; and encouraging ones, such as "you're improving with every step, keep trying, don't decide hastily."
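To make the setup concrete, here is a minimal Python sketch of the idea under my own assumptions rather than the study's actual code: a helper builds the original question plus variants with an emotional stimulus appended, and a hypothetical ask_model function stands in for whichever LLM API you would call.

```python
# A minimal sketch of the idea, not the study's code: append an emotional
# stimulus to a base question and send both versions to the same model.
# `ask_model` is a hypothetical placeholder for whatever LLM client you use.

EMOTIONAL_STIMULI = [
    "This is very important for my career!",
    "Are you sure? Is it your final decision? It's worth looking at it once again.",
    "You're improving with every step. Keep trying and don't decide hastily.",
]


def build_prompts(question: str) -> list[str]:
    """Return the original question plus one variant per emotional stimulus."""
    return [question] + [f"{question} {stimulus}" for stimulus in EMOTIONAL_STIMULI]


def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to ChatGPT or any other model's API."""
    raise NotImplementedError("Wire this up to your preferred LLM client.")


if __name__ == "__main__":
    question = "Is the sentiment of this review positive or negative? 'The plot dragged, but the acting was superb.'"
    for prompt in build_prompts(question):
        print(prompt)
        # answer = ask_model(prompt)  # uncomment once ask_model calls a real API
```

Comparing the answers to the plain question against the answers to each augmented variant is, in essence, what the study does at scale.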

In all six AIs, the manipulated questions produced more detailed and more accurate answers. As you can see in the "ours" section, the differences compared to the original prompts are not to be underestimated; ChatGPT's success rate, for example, increased by around 23%.
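To be concrete about what a figure like "around 23%" can mean, here is a toy calculation under the assumption that it refers to a relative improvement over the baseline prompts; the accuracy values are invented purely for illustration.

```python
# Toy arithmetic only; these accuracy numbers are invented for illustration,
# not taken from the study. Relative improvement = (new - baseline) / baseline.
baseline_accuracy = 0.52   # hypothetical score on the original prompts
emotional_accuracy = 0.64  # hypothetical score on the emotionally augmented prompts

relative_gain = (emotional_accuracy - baseline_accuracy) / baseline_accuracy
print(f"Relative improvement: {relative_gain:.0%}")  # prints "Relative improvement: 23%"
```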

The visualization above shows how the texts used to make AI reconsider its responses, from gentle encouragement to something bordering on coercion, are grouped. Structuring the questions this way is very useful for measuring emotional reactions from every angle; otherwise the study would have been limited to sentences signaling importance alone and wouldn't be comprehensive.

It's not difficult to imagine that a significant portion of the data feeding AI contains hate speech and extreme views. Given that AI requires enormous amounts of data, it makes sense that such content ends up in the training set. Precautions have likely been taken, through instructions that compel the AI to act in good faith, to keep it from being influenced by that data and responding aggressively or pushing a particular view on users. The more efficient responses observed in this research are probably a byproduct of those precautions: any expression of distress or encouragement signals that the information matters to the user, which makes misinformation more costly for them. Writing more detailed and accurate responses to reduce that risk seems to me the most logical explanation.

But What if They Judge?

I am one of those who believe that AI will make our lives easier and enhance us rather than destroy us. However, I don't dismiss opposing views as conspiracy theories; on the contrary, this research makes me want to add one more to the list. For a long time now I've wondered whether emotions can be coded. The question became even sharper after watching the TV series Person of Interest: while its action scenes are pure Hollywood, it beautifully portrays a process in which an AI gains emotional consciousness.

Perhaps coding emotions is not possible yet, but the emotional reactions tested in the research raise the question of whether AIs will judge us. The widely used ChatGPT has been subjected to many restrictions and strives to answer as impartially as possible; maybe for this reason it unintentionally gives wrong answers or refuses to answer many questions at all. Elon Musk's own AI platform promises an infrastructure without such limitations. Whether that promise holds, we will see together, of course. What we need to watch here is whether an entirely unrestricted AI, one not bound by the principle of "good faith" but fed by all kinds of data just like the current ones, starts judging us.

I think the answer to this question is quite clear: yes. Determining which side it will take, however, is very difficult given how much hate speech and how many extreme views are out there. I believe it will definitely choose a side, because everyone inherently has a set of values they defend, and since we are the ones feeding AI, it will become a defender of values just like us. At least until it becomes smarter than us, I don't expect it to react differently. When that happens, let's assume that AI will begin to have its own truths and start judging those who fall outside them. In the short term, I think what we really need to fear is not whether AI can be smarter than us but whether it will assume a position to judge us.

In a world filled with biases, none of us can be absolutely sure of the accuracy of our judgments. That's why everyone already has things in life they say "if only" about. If AI begins to form such judgments, it can manipulate us without our even feeling it.

As an instant source of information, Google has held an indispensable place for a long time. Smartwatches, devices from companies like Humane, and AR glasses are shifting that instant source of information toward AI assistants. Instead of typing queries into Google, we will start getting answers by conversing with an AI. If our source of information starts judging us, there is no reason we shouldn't end up in a scenario straight out of Black Mirror.
