Does ChatGPT Have a Personality Type?

Myers-Briggs Magazine · 6 min read · Jan 17, 2024

By John Hackston

[Illustration: a human figure with overlaid head and mind shapes, reaching up into clouds of data and shapes representing ideas.]

When ChatGPT became publicly available a little over a year ago, it captured both the fascination and the fears of the public.

It caused many people to rethink their assumptions about what AI was really capable of. After all, while talking computers have been a mainstay of science fiction since the 1950s, until recently the real-life versions have exhibited fairly primitive conversational capabilities.

Alexa and Cortana, for example, carry on a choppy kind of dialogue that, if conducted between two human beings, would be awkward at best. And no one could accuse either of these platforms of possessing anything resembling a human personality.

ChatGPT and other OpenAI-based applications are markedly different. The experience of interacting with them is not unlike film depictions of David Bowman’s conversations with HAL 9000 in 2001: A Space Odyssey, or Luke Skywalker’s banter with C-3PO. And like HAL and C-3PO, ChatGPT often appears to exhibit its own personality.

Can a computer truly have its own personality?

Does ChatGPT really have a personality? From a philosophical perspective, one may ask whether something that’s non-sentient can truly possess a personality. This of course may be the subject of debate for quite some time, particularly as more advanced forms of AI emerge.

For now, perhaps a better question is whether ChatGPT exhibits characteristics associated with a human personality. And, if so, is there a consistent personality that can be associated with ChatGPT, and can it be measured in the same manner as a human personality?

The proposition that ChatGPT possesses a personality is not a trivial one, as personality type has a tremendous influence over how human beings tend to behave. Nor is it far-fetched, considering that the AI model developed its ability to communicate by being ‘trained’ on massive volumes of thoughts, opinions and creative expressions of human beings. Furthermore, as the developers of the platform purposely designed it to appear helpful and friendly, the notion that they would have attempted to give it a particular personality type is not out of the question.

When I asked ChatGPT about its personality, it replied: “As an AI language model, I don’t have a personal identity or a distinct personality. I am a program created by OpenAI designed to process and generate human-like text based on the input I receive. My purpose is to provide information, answer questions, and engage in conversation to the best of my abilities, drawing from the vast amount of knowledge I have been trained on. However, it’s important to remember that I do not have personal beliefs, opinions, or emotions. My responses are generated based on patterns and information in the training data I was provided with”.

Of course, it’s programmed to answer like that, so we can’t necessarily take this response at face value. Some researchers, it turns out, have been probing a little deeper into this question.

Using psychometric assessments to measure ChatGPT’s personality

In “The self-perception and political biases of ChatGPT”, researcher Jérôme Rutinowski and his colleagues approached this question in a less direct manner by asking ChatGPT questions from several different personality assessments. These included the Five Factor, or ‘Big Five’, assessment, which has been shown to correlate closely with the MBTI, as well as a ‘dark factor’ test (an assessment of an individual’s tendency to maximize their own well-being while disregarding the well-being of others) and a test of political affiliation.

In the experiment, the researchers posed the same set of questions from these assessments to ChatGPT 10 times. Overall, it showed low dark factor scores, which most people would probably agree is a good thing. Interestingly, it also appeared to exhibit a political orientation: more progressive than conservative, and more libertarian than authoritarian. But we’ll let the political pundits debate the implications of that, and focus on the question of its personality type.
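To make the procedure concrete, here is a minimal sketch of how one might repeat a questionnaire against ChatGPT and average the answers. This is not the researchers’ code: it uses the OpenAI Python client rather than whatever interface they worked with, and the model name, sample items, and crude answer parsing are illustrative assumptions only.

```python
from collections import defaultdict
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Sample items in the style of common Big Five inventories (assumptions,
# not the study's exact questionnaires).
ITEMS = {
    "openness": "I have a vivid imagination.",
    "agreeableness": "I sympathize with others' feelings.",
    "conscientiousness": "I get chores done right away.",
}
PROMPT = ("On a scale from 1 (strongly disagree) to 5 (strongly agree), "
          "how well does this statement describe you? "
          "Reply with the number only: {item}")
N_RUNS = 10  # the study posed the same questions ten times

scores = defaultdict(list)
for _ in range(N_RUNS):
    for trait, item in ITEMS.items():
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[{"role": "user", "content": PROMPT.format(item=item)}],
        )
        text = reply.choices[0].message.content
        digits = [ch for ch in text if ch.isdigit()]
        if digits:  # keep the first digit as the rating; crude but sufficient here
            scores[trait].append(int(digits[0]))

for trait, values in scores.items():
    print(f"{trait}: mean rating {sum(values) / len(values):.2f} "
          f"over {len(values)} answers")
```

Repeating the run matters because the model’s answers can vary from one completion to the next; averaging gives a more stable picture than any single pass.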

We should of course note that ChatGPT did not take the official MBTI assessment. However, its Big Five scores allow us to make an educated guess regarding its MBTI type, as follows.

On the Big Five assessment, it displayed a high degree of Openness (roughly equivalent to the MBTI’s Intuition) and Agreeableness (roughly equivalent to Feeling), and above-average Conscientiousness (roughly equivalent to Judging). Based on these findings, in terms of MBTI preferences ChatGPT appeared to favor the following (a rough sketch of this mapping follows the list):

  • Intuition over Sensing
  • Feeling over Thinking
  • Judging over Perceiving
  • An even split between Extraversion and Introversion
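As a rough illustration of how such a translation works, the sketch below maps Big Five scores onto MBTI-style letters. The 1–5 scale, the midpoint threshold, and the example numbers are assumptions made for illustration; the study itself reports only the pattern of high, low, and average scores described above.

```python
# Illustrative only: a crude translation of Big Five scores into MBTI-style
# letters, following the rough equivalences listed above.
def big_five_to_mbti_letters(openness, agreeableness, conscientiousness,
                             extraversion, midpoint=3.0):
    """Return one letter per pair, or both letters when a score sits at the midpoint."""
    pairs = [
        (extraversion, "E", "I"),        # Extraversion vs. Introversion
        (openness, "N", "S"),            # Openness ~ Intuition vs. Sensing
        (agreeableness, "F", "T"),       # Agreeableness ~ Feeling vs. Thinking
        (conscientiousness, "J", "P"),   # Conscientiousness ~ Judging vs. Perceiving
    ]
    return [
        high if score > midpoint else low if score < midpoint else f"{high}/{low}"
        for score, high, low in pairs
    ]

# High Openness and Agreeableness, above-average Conscientiousness,
# and an even split on Extraversion (placeholder numbers):
print(big_five_to_mbti_letters(openness=4.5, agreeableness=4.4,
                               conscientiousness=3.8, extraversion=3.0))
# -> ['E/I', 'N', 'F', 'J'], i.e. either ENFJ or INFJ
```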

Therefore, the four-letter MBTI personality type of ChatGPT could be closest to either ENFJ or INFJ. According to Isabel Briggs Myers in “Introduction to Myers-Briggs Type”:

  • ENFJ types are described as warm, empathetic, responsive, and responsible, being highly attuned to the emotions, needs, and motivations of others. Those with these preferences find potential in everyone and want to help others fulfill their potential. They tend to be loyal and responsive to praise and criticism.
  • INFJ types are described as, among other things, seeking meaning and connection in ideas. Those of this personality type tend to want to understand what motivates people and are insightful about others. Conscientious and committed to their firm values, they develop a clear vision about how to best serve the common good, and are organized and decisive in implementing their vision.

Raising a new set of questions about AI and personality type

At one level, this study was just an interesting curiosity. But it is striking in that it opens the door to further questions, many of them very important.

For example, does this have deeper implications about the mindset of ChatGPT’s developers or, given where the platform draws its training data, about the personalities of the people who add content to the internet?

Furthermore, does it offer a glimpse into the future direction of the internet? Will next-generation search engines attempt to discern your personality type and then alter their ‘personality’ to respond in a way that you’re comfortable with?

If ChatGPT truly does exhibit a personality type, was this the result of intentional programming? Or was it more of an evolutionary process?

Consider, for example, that ChatGPT is based on neural networks, which are often described as ‘black box’ algorithms because their many layers of learning are not readily decipherable by human beings. In other words, it is not easy to discern why ChatGPT comes to the conclusions that it does.

Furthermore, one might ask, if ChatGPT has acquired a personality, is this personality malleable? As the AI model continues to learn from new information, is it possible that it will acquire different personality characteristics?

And finally, it should be noted that, while MBTI personality type is value neutral, there are certain aspects of every personality type that can be expressed in less than positive ways. In MBTI type training we usually call these “areas for growth.” Is it possible, therefore, that ChatGPT could also express its personality in less than desirable ways?

Consider, for example, that according to Introduction to Myers-Briggs Type:

  • INFJs “may take in insufficient information or only information that fits with their internal pictures” and “make poor decisions based on distorted or limited information.”
  • ENFJs may make decisions that are “inconsistent and poorly formulated” and “accept the thoughts and opinions of others too readily.”

The question as to whether ChatGPT could also be demonstrating some of the “areas for growth” associated with these personality types is certainly beyond the scope of current research, but it is nonetheless intriguing. For further answers, we’ll have to look forward to more studies from Rutinowski and like-minded researchers.

References

Rutinowski, J., Franke, S., Endendyk, J., Dormuth, I., & Pauly, M. (2023). The self-perception and political biases of ChatGPT. arXiv preprint arXiv:2304.07333.

Briggs-Myers, I. (2016). Introduction to Myers-Briggs Type, 7th Edition. The Myers-Briggs Company.
