Human Rights Implications of IBM Watson’s ‘Personality Insights’ Tool
Disclaimer: I have no affiliation with IBM nor with any of the companies mentioned in this blog post.
I don’t like personality tests!
Whether they reveal something right or wrong about you, I don’t want to reduce people’s understanding of me to four letters, or five numbers, or a few signs. I could write many pages about how this slapdash way of “understanding” a person might lead to discrimination at work, misguided judgment, or violations of one’s privacy, but that’s not quite the point of this blogpost — though it is the motivation behind writing it.
Here, I will focus on one specific new form of personality testing — one that relies on machine learning. I’m referring to the IBM Watson product called Personality Insights. According to IBM’s website, the tool “uses linguistic analytics to infer individuals’ intrinsic personality characteristics, including Big Five [or O.C.E.A.N], Needs, and Values, from digital communications such as email, text messages, tweets, and forum posts.” In addition, Personality Insights shows your consumption habits and “temporal behavior” (if the input text is timestamped).
Let me show you what this means. I fed the tool with my Twitter feed and received this nice visualization of the tool’s output, supposedly showing my personality characteristics, consumer needs, and values:
If you look into the output file (here), you can see that, according to the tool, I am more likely “to be influenced by online ads when making product purchases.” I am also more likely to be concerned about the environment and to like documentaries, and less likely to like musical movies (🤚🏽objection: one of my favorite shows these days is Crazy Ex-Girlfriend).
After seeing these results, my cynical mind went directly to Cambridge Analytica, the company that used people’s social media activities to predict their psychological profile and, later, their voting behavior.
So, as a researcher in technology and human rights, I decided to dig in and play around with the tool.
First, let’s see how the tool works
Input: Personality Insights takes tweets, emails, text messages, blog posts, or anything else written by the individual whose personality is being assessed. The tool currently supports English, Spanish, Japanese, Korean, and Arabic, although according to the website, the results for Arabic and Korean are not good enough to be conclusive. You can feed the tool as little as 100 words and still get a result, but for the best accuracy you need around 3,000 words of input text. (The demo and IBM’s documentation go into more detail about the acceptable input formats.)
Output: After processing the input data, the tool returns the full result (in JSON or CSV format), showing numerical scores for 52 personality characteristics along with your consumption behavior. Each score is expressed as a percentile relative to the sample population. For example, if my “adventurous” score is 0.25, it means that, based on my writing, I’m more adventurous than 25% of the sample population and less adventurous than the other 75%.
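To make the score format concrete, here is a small Python sketch that parses a mock profile shaped roughly like the tool’s JSON output (a list of trait objects with a name and a percentile) and turns each percentile into the plain-language reading above. The trait IDs and numbers are invented for illustration, and the exact field names are my assumption based on IBM’s documentation, not real tool output:

```python
import json

# Illustrative mock of the kind of JSON profile the tool returns;
# trait IDs, names, and percentiles here are made up for demonstration.
profile_json = """
{
  "personality": [
    {"trait_id": "big5_openness", "name": "Openness", "percentile": 0.87},
    {"trait_id": "facet_adventurousness", "name": "Adventurousness", "percentile": 0.25}
  ]
}
"""

profile = json.loads(profile_json)

def describe(trait):
    """Turn a percentile score into the plain-language reading used above."""
    pct = trait["percentile"]
    return (f"{trait['name']}: higher than {pct:.0%} of the sample population, "
            f"lower than {1 - pct:.0%}")

for trait in profile["personality"]:
    print(describe(trait))
```

For the 0.25 “Adventurousness” entry, this prints the same interpretation given in the paragraph above: higher than 25% of the sample population, lower than 75%.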
Note: The sample population consists of Twitter users whose information was collected and analyzed by IBM’s Personality Insights: one million users for English, two hundred thousand for Korean, one hundred thousand each for Arabic and Japanese, and eighty thousand for Spanish. The demographics of the sample population (age, gender, literacy level, etc.) were not revealed.
The tool also supplies the raw scores if you want to do a custom normalization based on your own sample population (e.g. your score compared to the employees of the company you work for). More about output format and its interpretation can be found here and here.
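A custom normalization of the kind IBM describes can be sketched as a simple percentile rank against a population you choose. The team scores below are invented for illustration:

```python
def percentile_rank(raw_score, population_scores):
    """Fraction of the population scoring strictly below raw_score.

    Mirrors the percentile interpretation described above, but computed
    against your own sample population (e.g. your coworkers) instead of
    IBM's Twitter sample.
    """
    below = sum(1 for s in population_scores if s < raw_score)
    return below / len(population_scores)

# Hypothetical raw "adventurousness" scores for a 10-person team
team_scores = [0.41, 0.55, 0.38, 0.62, 0.47, 0.50, 0.44, 0.58, 0.35, 0.66]
my_raw_score = 0.52

print(percentile_rank(my_raw_score, team_scores))  # → 0.6
```

Here a raw score of 0.52 lands above 6 of the 10 team members, so the custom-normalized percentile is 0.6.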
Model: The underlying method is based on the Open-Vocabulary approach. This method was developed by researchers at the University of Pennsylvania who analyzed the Facebook statuses of 75,000 volunteer users. On the basis of this analysis and accompanying personality questionnaires, they built models to predict an individual’s age, gender, and personality.
Earlier versions of Personality Insights, however, used the Linguistic Inquiry and Word Count (LIWC) psycholinguistic dictionary. (You can read more about the LIWC dictionary here.)
To build the Personality Insights tool, IBM researchers also conducted a set of background studies and developed different machine learning models to understand the relationship between people’s Twitter activity and their personality characteristics. For example, by studying 3,500 Twitter users, they found that people who retweet more are more likely to be rated as modest, open, and friendly. To read and understand the background studies, check out this link.
To put it in a nutshell, Personality Insights uses the open-source GloVe word-embedding technique to build a vector representation of each word in the input text. It then feeds these vectors into a machine learning algorithm for training and testing. (There is no further explanation of the details of this algorithm; however, in a study entitled 25 Tweets to Know You: A New Model to Predict Personality with Social Media, IBM researchers combined GloVe word-embedding features with Gaussian Process regression to infer personality characteristics.)
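To make the pipeline concrete, here is a minimal numpy sketch: average toy word vectors (standing in for GloVe embeddings) into one fixed-length vector per text, then fit a bare-bones Gaussian Process regressor on survey-style trait scores. Everything here — the tiny embedding table, the texts, the scores, the kernel settings — is invented for illustration; IBM has not published its actual model:

```python
import numpy as np

# Toy stand-in for a GloVe embedding table (real GloVe vectors have 50-300 dims)
embeddings = {
    "we": np.array([0.2, 0.8]),
    "great": np.array([0.9, 0.1]),
    "together": np.array([0.3, 0.7]),
    "worried": np.array([-0.6, 0.4]),
    "alone": np.array([-0.4, -0.5]),
}

def text_to_vector(text):
    """Average the embeddings of known words: one fixed-length vector per text."""
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0)

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between two sets of row vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

class GPRegressor:
    """Bare-bones Gaussian Process regression (RBF kernel, fixed noise)."""
    def fit(self, X, y, noise=1e-4):
        self.X = X
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, y)
        return self

    def predict(self, X_new):
        return rbf_kernel(X_new, self.X) @ self.alpha

# Invented training data: texts paired with a survey-derived trait score
train_texts = ["we great together", "worried alone", "great together"]
train_scores = np.array([0.8, 0.2, 0.7])

X = np.stack([text_to_vector(t) for t in train_texts])
gp = GPRegressor().fit(X, train_scores)

# A new text close to the high-scoring examples yields a high prediction
print(gp.predict(text_to_vector("we together")[None, :]))
```

The averaged-embedding step is the “vector representation” part; the GP regressor stands in for the undisclosed final model, following the 25 Tweets to Know You paper’s combination of GloVe features with Gaussian Process regression.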
Training: The model is trained on surveys conducted among thousands of users, along with data from their Twitter feeds. There are no further details about the demographics (age, gender, language, literacy level) of the surveyed population, but previous IBM studies mostly used Twitter data and surveys from English-speaking users to train and test their models.
Evaluation Metrics: To understand the accuracy of Personality Insights, IBM conducted a validation study, collecting survey responses and Twitter feeds from 1,500 to 2,000 participants for each language. They then compared the survey scores with the scores derived from Personality Insights, measuring the average Mean Absolute Error (MAE) and the average correlation between the two scores for different categories of personality characteristics. (MAE ranges from 0 to 1, where 0 means the predicted score exactly matches the actual survey score and 1 means maximum error. Correlation is on a scale of -1 to 1. The best average correlation reported is 0.35, which is not high; however, according to the IBM website, correlations greater than 0.2 are considered acceptable in the research literature for this domain.)
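Both evaluation metrics are straightforward to compute. Here is a small sketch with invented scores for five participants (the real validation used 1,500 to 2,000 per language):

```python
import numpy as np

def mean_absolute_error(survey, predicted):
    """Average absolute gap between survey scores and predicted scores (0 = perfect)."""
    survey, predicted = np.asarray(survey), np.asarray(predicted)
    return np.abs(survey - predicted).mean()

def pearson_correlation(survey, predicted):
    """Pearson correlation between the two score lists (-1 to 1)."""
    survey, predicted = np.asarray(survey), np.asarray(predicted)
    return np.corrcoef(survey, predicted)[0, 1]

# Invented scores for five participants, purely for demonstration
survey_scores    = [0.70, 0.20, 0.55, 0.90, 0.40]
predicted_scores = [0.60, 0.35, 0.50, 0.75, 0.45]

print(mean_absolute_error(survey_scores, predicted_scores))   # ≈ 0.10
print(pearson_correlation(survey_scores, predicted_scores))
```

Note that a low MAE and a high correlation measure different things: MAE asks how far off each predicted score is, while correlation asks whether the predictions rise and fall with the survey scores.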
A few important points about the model:
- Models for all the supported languages are built the same way. In the case of Twitter texts, this means the model assumes people’s Twitter behavior is independent of their language/country.
- The tool doesn’t consider user demographics such as age, gender, race, and culture (more here). However, in the future, IBM might develop models that are specific to different demographics. In collaboration with Acxiom, IBM conducted a study in which they showed “using demographics and personality characteristics together usually yields better accuracy” for predicting people’s consumption behavior for marketing purposes.
After gaining some knowledge about how the tool works, I decided to do some experiments as follows:
Can I “game” the system to make myself a better job candidate?
Let’s say I applied for a job that requires blog writing. The organization asks for writing samples, and I point them to my former blog posts. The hiring manager decides to run my blog posts through Personality Insights to get some understanding of me. She feeds the tool three of my previous posts (Swipe Left: Privacy Practices of Online Dating Apps, Tech workers of the world, unite for human rights!, and Announcing the Humane AI newsletter) and gets her result. She then discovers that, according to this tool, I suck at “Orderliness,” “Dutifulness,” and “Gregariousness”! Yes, all the good qualities that your boss would ideally like you to have…
How can I fix this? I might decide to make little edits to my text in order to game the result and make myself a better potential job candidate. As an example, here I’ll make a few minor changes to those posts and feed the tool with the modified text.
In reading more about the Linguistic Inquiry and Word Count (LIWC) dictionary, I learned that the habit of using certain categories of words correlates with personality characteristics. So I decided to change singular first-person pronouns to plural ones and added a few more tweaks to my original text.
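The kind of pronoun swap I made by hand can be sketched as a naive regex substitution. The word list below is my own shorthand, not the actual LIWC pronoun category, and a real edit would also need verb-agreement fixes (e.g. “I am” → “we are”) that this doesn’t attempt:

```python
import re

# First-person singular → plural swaps, inspired by the LIWC pronoun
# categories mentioned above; this mapping is my own, not LIWC's.
SWAPS = {
    "i": "we", "me": "us", "my": "our", "mine": "ours", "myself": "ourselves",
}

def pluralize_first_person(text):
    """Replace singular first-person pronouns with plural ones, preserving case."""
    def repl(match):
        word = match.group(0)
        new = SWAPS[word.lower()]
        return new.capitalize() if word[0].isupper() else new

    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

print(pluralize_first_person("This is my project and mine alone."))
# → This is our project and ours alone.
```

Even this crude substitution shifts the tool’s input toward the “we”-heavy language that, per the LIWC literature, reads as more agreeable and less self-focused.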
Here you can see the changes I made (color-coded in green):
Mission accomplished! Just a few tiny changes (like removing potentially negative and selfish-seeming words like “concerned”, “mine” and “my”) improved my personality characteristics to portray a more trustworthy and dutiful job candidate who can’t wait to climb up that corporate ladder, step by step! ;)
Somewhere along the way, I also lost some of my emotionality and artistic interests. Also, note that these results — both original and modified text from my blogs — are very different from the results based on my Twitter feed (which are shown at the beginning of this post).
You can see the input and output files here.
Does religion matter?
For the second experiment, I decided to feed the tool with Rep. Ilhan Omar’s public speeches. Just like the previous experiment, I tweaked the text as follows:
And here is the result. The changes are very small, but they still made me ponder the implications, magnified at a larger scale! (You can see the input and output files here.)
By using this example, I don’t mean to conclude that IBM’s Personality Insights discriminates against a certain religious group. But I do want to show that there must be a variable, a combination of variables, or some pattern in the training data (either in the GloVe word representations or the final model) that produced these differences, and it would be very valuable to understand the reasoning behind them.
So long as the model is a Black Box, we are not able to interpret the main reasons for the changes in the output. I’m worried about how this lack of transparency in the tool’s decision-making process could harm certain religious groups if, hypothetically, the tool were used by a government agency to infer the personality characteristics of different groups of asylum-seekers who were persecuted in their home countries for their religious activities online.
Use Cases and Human Rights Implications
We can come up with many different experiments to play with and test Personality Insights. But to me, it’s more about the human rights implications of this tool. What are some of the actual use cases? How can this tool affect our right to work? Privacy? Freedom of thought? What should IBM do in order to avoid the risks of potential adverse impacts?
To understand some of the human rights implications of this tool, I used the UN Guiding Principles on Business and Human Rights (UNGP) as a guideline to understand how the human rights of different vulnerable groups might be impacted through different use cases. In this infographic, I tried to briefly describe what I mean by UNGP and by “Human Rights Impact Assessments”:
To conduct a thorough Human Rights Impact Assessment, one should speak with different stakeholders: the machine learning engineers who developed the tool, the third-party developers and entities who use the Personality Insights API to build custom applications, civil society organizations, and legal experts. We can use the core international human rights instruments to come up with different scenarios and experiments through which different groups of rights holders might be affected. For this post, I didn’t have the time and resources to speak with all of these stakeholders, so I decided to list a few use cases of this tool along with the human rights concerns associated with them.
IBM claims that Personality Insights can be used for targeted marketing and customer acquisition, personal connections (e.g. dating, doctor-patient matching, customer care), and resume writing. That’s all in addition to specific applications such as “monitoring and predicting mental health” and “monitoring radical and rogue elements via social media” (detecting early signs of radicalization).
While reading IBM blog posts about some of the current use cases of Personality Insights, the two applications below caught my attention. Here are video clips that explain the services. I’ve written short summaries alongside the videos.
KangoGift provides Human Resources (HR) tools to “capture, recognize, reward, and amplify exemplary employee actions and performance.” Powered by IBM Cloud and Watson’s Tone Analyzer and Personality Insights.
According to KangoGift, “thousands of managers across 20 countries use our tools to help determine who, when and why people should be recognized at work.”
More here: www.ibm.com/blogs/client-voices/high-tech-ai-solution-boosts-employee-engagement/
MXM uses IBM’s Personality Insights to help companies in hiring decisions, talent management, and optimizing team performance. According to MXM, their services also enable educational institutions to “help students choose the right course of study.”
According to MXM, they are one of the top resource management providers in Brazil. Note: Portuguese is NOT among the Personality Insights’ supported languages!
More here: www.ibm.com/blogs/client-voices/revolutionizing-erp-ai-high-tech-solutions/
In my opinion, for the above-mentioned cases, the most glaring human rights concerns are associated with:
The right to equality and freedom from discrimination (Article 2 of the Universal Declaration of Human Rights)
Why is this excerpt from the Universal Declaration of Human Rights relevant, you might be wondering?
English is my second language. If you are an Iranian — or have Iranian friends — you know that we sometimes make the mistake of dropping articles (a, an, the) when speaking and writing in English. Sometimes we also use “he” and “she” pronouns interchangeably, because we don’t have gendered pronouns in Farsi.
Different people have different writing and speaking habits. Do these honest mistakes change my personality score? If one of the above-mentioned companies were to use my interview transcript or my writing to assess my personality for hiring or promotion decisions, how would I fare compared with a native American-English speaker?
In a project called “Watch Your Words,” carried out by the Harvard/MIT 2019 Assembly Cohort, researchers showed how misspellings and different spacing and pronoun choices can unexpectedly impact the results of off-the-shelf cloud-based Natural Language Processing systems. Other studies have looked into racial disparities in NLP systems between tweets written in African-American English and those in Mainstream American English. Researchers have also flagged stereotypical and gender biases in word embeddings, a technique that Personality Insights relies on heavily.
In short, these factors make it very hard to maintain “a just and favorable condition of work” if workers are going to be subjected to assessment based on tools like Personality Insights.
But that’s not all. There are many more ways in which the human rights of workers could be limited. Here are two more articles from the Universal Declaration of Human Rights whose protections this tool could potentially undermine:
The right to freedom of opinion and expression (Article 19 of the UDHR)
The right to freedom of peaceful association and assembly, the right to organize and bargain collectively (Article 20 of the UDHR; ILO Declaration on Fundamental Principles and Rights at Work)
Let’s go back to my blog-post experiment. As you might recall, with a few tweaks I was able to make myself seem to be a more trustworthy and dutiful job candidate. So if I know that my tweets might become a deciding factor in my employment, is this going to have a chilling effect on what I write and how I write?
If for some unknown reason (due to a black-box decision) one’s “authority challenging” characteristic scores high, is it going to raise a red flag for the company’s executives?
During the past couple of years, tech workers and labor organizations have organized several protests and walkouts to condemn some of their companies’ practices and to demand more transparency. Will Palantir, Amazon, Uber, or any other company use something like Personality Insights to assess the “authority challenging” scores of their employees based on their email communications, tweets, or public forum posts? How is this going to affect their employees’ and contractors’ right to freedom of peaceful protest and organizing?
Here is another example:
The right to freedom from interference with privacy (Article 12 of the UDHR)
IBM claims that Personality Insights is stateless, meaning “no Content (including any Client Personal Data) is stored or persisted within this Cloud Service [Personality Insights].” This is a good thing, in my view. But my concern is not only about whether IBM itself stores the data, but about the third-party developers and other IBM clients who use the Personality Insights API for their custom applications. According to IBM, “Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation. […] IBM does not provide legal, accounting or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation.” (source) IBM’s Lite plan lets you use Personality Insights for 1,000 API requests per month at no cost.
So in short, anyone, for any reason, can use Personality Insights to infer your personality based on your public writings, or email communications, or social media activities — all without even informing you about it.
What I wrote above is just one example of how the black-box systems that are currently proliferating — IBM Watson is, after all, just one of many — can have unintended consequences. True, I have no doubt that there are valid use cases for this tool and for others like it. But it seems clear to me that the potential impacts have not been fully thought through: how can we know that the benefits of such a tool outweigh the potential damage it could impose on some of the most vulnerable populations?
If you are a developer trying to use any cloud-based ML service for your specific application, I strongly urge you to dig into the service documentation with an eye toward human rights. Use the core international human rights instruments, which explain the rights of different protected groups, as a guiding tool, and come up with counterfactual scenarios and patterns to test the tool before you use it. See how often the provider updates its services and documentation. And always think about the larger impacts and implications. (You might find the methodologies used in these two posts useful: Gender and Racial Bias in Cloud NLP Sentiment APIs, Losing Confidence in Quality: Unspoken Evolution of Computer Vision Services.)
And finally, for IBM itself, I have only one ask 🙋🏻♀️.
Fill out and publicly release the “FactSheet” that your own employees on the IBM Research team recommended: FactSheets: Increasing Trust in AI Services through Supplier’s Declarations of Conformity.
If you can’t stop making Black Boxes, at least adhere to your employees’ recommendation to release a “FactSheet.” Be transparent and try to offer an example of what can go right with machine learning, rather than what is currently going wrong.
I am passionate about the human rights implications of new technologies. You can follow my work on my website or subscribe to my Humane AI newsletter. I recently launched Taraaz, an independent technology and human rights research and consulting organization, and would love to speak with you if you have questions or concerns about the human rights implications of your work.