Why I think we should not use “confirmation bias” in machine learning yet

Confirmation bias is when we select certain facts or pieces of information so that we can arrive at a conclusion we already favor. This is natural to us and may be a result of our evolution: it saves us from thinking. For some people, holding even two contradictory ideas can be overwhelming. The brain may have evolved to make this easier for us: once you pick a side, the brain automatically convinces you that the facts in your favor are better and that the ones against you are wrong. This cognitive bias is widely studied, and it is the most famous among the many cognitive biases that have been identified.

The topic first occurred to me while I was preparing a paper I was writing.

“If we make an analogy to humans: gpt-3.5-turbo-1106, in this specific case, did not fall into confirmation bias. Based on observations during the last conversation using gpt-4-1106-preview, the GPT-4 model seems less likely to fall into confirmation bias as we use it here (the model uses SnakeFace’s mistakes to confirm its initial guess).”

I believe it is not yet appropriate to apply “confirmation bias” to Large Language Models, even though the phenomenon can look alike. This kind of usage is not new. We call the numbers released by these models “probabilities”, even though they are not probabilities in the statistical sense. This causes misinterpretations, which I have seen myself. Probability is a well-defined term, backed by theories and rigorous proofs.
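To make the “probability” point concrete, here is a minimal sketch of how these numbers are typically produced; the logit values are hypothetical. The softmax output sums to 1, so it looks like a probability distribution, but it only encodes the model's relative preference among candidate tokens; nothing in the construction guarantees it is a calibrated probability in the statistical sense.

```python
import math

def softmax(logits):
    """Turn raw model scores (logits) into values that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [2.1, 0.3, -1.5]
scores = softmax(logits)

print(scores)       # approx. [0.838, 0.139, 0.023]
print(sum(scores))  # 1.0 -- a valid distribution, but not a calibrated one
```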

In my paper, I wrote:

Making an analogy to human psychology: here, their algorithm fell into confirmation bias.

It is an analogy, and care should be taken to avoid misusing the term. Even though “confirmation bias” has entered public discussion, it is a well-defined term in psychology.

Full discussion

See the comments for more context.
