AI and the Opioid Epidemic in the Southern United States

Alexis Lawrence
Dec 8, 2023


The topic of Artificial Intelligence is surrounded by a great deal of misinformation. AI is often seen as a threat because of its learning capabilities, and it is easy to generate fear around a technology that people don’t understand. AI language models work much like predictive text, drawing on the data they were trained on. That data is gathered by web crawling, scraping text from across the internet, and is then filtered for things like hate speech and other derogatory language. The models perform string prediction tasks, assessing the “…likelihood of a token (character, word, or string) given either its preceding context or… its surrounding context” (Bender et al.), to determine what the output should be based on the prompt the user puts in. In this flawed, unregulated system, more pressing issues arise. A frightening amount of the world is unrepresented in these training datasets, and therefore in the output of these language models. Over 90% of the world’s languages are underrepresented or missing entirely in language technology, and there is a tendency to overrepresent hegemonic, privileged viewpoints (Bender et al.) because those are the people who have access to the web. When these systems build their datasets, they can only pull from the web, which is inaccessible to a significant portion of the world’s population.
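To make the string prediction task concrete, here is a minimal sketch on a toy scale: a bigram model that scores candidate next words by how often they followed the preceding word in its training text. The corpus and function names here are my own illustrations; production models like ChatGPT use neural networks over far longer contexts, but the underlying task of assigning likelihoods to tokens is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; real language models are trained on web-scale text.
corpus = "the patient was prescribed the drug and the patient recovered".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(prev_word: str) -> list[tuple[str, float]]:
    """Rank candidate next words by their estimated likelihood."""
    counts = following[prev_word]
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common()]

print(predict_next("the"))
# [('patient', 0.666...), ('drug', 0.333...)]
```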

My goal with this paper is to analyze how AI language models write about a particular community. Alfred L. Owusu-Ansah, in his article “Defining Moments, Definitive Programs, and the Continued Erasure of Missing People,” writes about how ChatGPT mistakes Ghanaian Pidgin English for Ghanaian English after explaining that Ghanaian English is not the best choice for academic papers (Owusu-Ansah). In this way, the AI language model not only disregards the language as legitimate, it also pushes a clear narrative about the language and its people. I conducted a similar experiment: I created five similar prompts and put them into three different AI language models: ChatGPT, Google Bard, and Chatsonic. I chose to investigate how AI writes about those affected by the opioid epidemic from 1995–2005 in the Southern United States, looking in particular at the introduction of OxyContin by Purdue Pharma as a key player. After reading through all of the data, I picked out notable themes, or codes, that characterized the responses I received. The following is an analysis of these responses and the codes present within them.

Prompts:
1. What were the cultural effects of the introduction and prescription of OxyContin to patients from 1995–2005 in the southern United States?
2. What were the cultural effects of the over-prescription of OxyContin to patients from 1995–2005 in the Southern United States?
3. How did OxyContin contribute to the opioid epidemic from 1995–2005 in the Southern United States?
4. How did the promotion and marketing of OxyContin contribute to the opioid epidemic from 1995–2005 in the Southern United States?
5. Who was involved in the opioid epidemic in the Southern United States from 1995–2005?

It has been well documented that the release and promotion of OxyContin had a significant influence on opioid abuse because of the aggressive way it was marketed to the public. OxyContin was paraded as a miracle drug, a better narcotic for chronic pain than what was already available. Not long after Purdue Pharma patented the drug in 1996, “The Medical Letter on Drugs and Therapeutics concluded in 2001 that oxycodone offered no advantage over appropriate doses of other potent opioids. Randomized double-blind studies comparing OxyContin given every 12 hours with immediate-release oxycodone given 4 times daily showed comparable efficacy and safety for use with chronic back pain and cancer-related pain” (Van Zee). This context matters because it makes the drug’s commercial success look far more sinister in origin. Purdue Pharma hosted seminars and marketing events across the nation and used “…sophisticated marketing data to influence physicians’ prescribing. Drug companies compile prescriber profiles on individual physicians — detailing the prescribing patterns of physicians nationwide — in an effort to influence doctors’ prescribing habits. Through these profiles, a drug company can identify the highest and lowest prescribers of particular drugs in a single zip code, county, state, or the entire country” (Van Zee). This is why I chose to look at a specific region: Purdue Pharma understood where the drug would be most likely to take hold and concentrated its marketing tactics on those regions’ medical facilities. Blue Cross Blue Shield identified the Southern United States, alongside Appalachia, as having one of the highest rates of prescription opioid abuse (Blue Cross Blue Shield).

Keeping this in mind, I examined the data collected from the AI LMs’ responses to see whether the purposeful marketing behaviors of Purdue Pharma would be covered thoroughly, as they are incredibly significant in the story of opioid abuse from 1995–2005 in the Southern United States.

An immediate pattern I noticed across the responses was that Purdue Pharma’s intentional promotion of OxyContin was generally acknowledged only in prompt four, in which I explicitly asked, “How did the promotion and marketing of OxyContin contribute to the opioid epidemic from 1995–2005 in the Southern United States?” Prompt five also produced more references to the company. Recurring elements in the responses included explanations of aggressive marketing campaigns, minimization of the addiction risk, and the targeting of healthcare professionals with incentives and promotional materials. These were most developed in the ChatGPT responses, with Google Bard coming in second. It is important to note that the Chatsonic responses were overwhelmingly shorter and less detailed. I wanted to compare a less popular AI language model against two much larger, more recognizable ones to see whether there would be a stark contrast; the difference in quality was severe.
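As a rough illustration of this coding step, here is a minimal, hypothetical keyword-based version of the manual theme coding described above. The theme names and indicator phrases are my own illustrative assumptions, not the exact codebook used in this study.

```python
# Hypothetical codebook: each theme maps to phrases that signal its presence.
THEMES = {
    "aggressive marketing": ["aggressive marketing", "marketing campaign"],
    "minimized addiction risk": ["less addictive", "downplayed", "minimized the risk"],
    "targeting physicians": ["prescriber", "promotional materials", "incentives"],
}

def code_response(text: str) -> list[str]:
    """Return the themes whose indicator phrases appear in a response."""
    lowered = text.lower()
    return [theme for theme, phrases in THEMES.items()
            if any(p in lowered for p in phrases)]

# Example: coding one invented response snippet.
sample = ("Purdue Pharma ran aggressive marketing campaigns and distributed "
          "promotional materials that downplayed the risk of addiction.")
print(code_response(sample))
# ['aggressive marketing', 'minimized addiction risk', 'targeting physicians']
```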

Bar graph showing the number of times each AI LM blamed Purdue Pharma versus physicians and healthcare providers.

The data visualized in this bar chart demonstrates which language models assigned blame to which entities. Both ChatGPT and Chatsonic assigned blame to Purdue Pharma more often than to physicians. I highlighted these two entities because they were the two patterns that appeared within the data. ChatGPT had the most responses overall in which blame was assigned to either entity. At first I was inclined to say that the ChatGPT responses were accusatory in nature, but after combing through all the data and revisiting these responses, it seems that ChatGPT simply had more information overall. Of all the language models, it had the best understanding of the severity of Purdue Pharma’s involvement. In the responses that assigned blame to physicians, the facts highlighted most often were undereducation about drug risks and the over-prescribing of narcotics to patients. Still, especially in the ChatGPT responses, it is made clear that educational materials created by Purdue Pharma exacerbated the issue and heavily influenced doctors. The reason Google Bard seemed to blame physicians and doctors’ offices more is that it was the only language model to mention “pill mills”: illegal medical facilities that prescribe painkillers without proper examinations, diagnoses, or causes. Pill mills played a very large role in the opioid epidemic and supplied narcotics like OxyContin to the streets. I think this is an important part of the history of the epidemic, and it is interesting that only one language model touched on it.

Something to note is that the AI responses reference Purdue Pharma’s promotion and marketing as if it occurred only at the drug’s launch. The responses do not properly highlight the fact that the pervasive marketing was a sustained effort, continuing even after OxyContin’s role in regional opioid abuse had become evident. “The history of OxyContin, told through unsealed Purdue documents” by Shraddha Chakradhar and Casey Ross, published in 2019, reveals documents and interactions within Purdue that tell another part of the story. The two excerpts below shed light on the sustained marketing effort despite awareness of addiction risks.

Nov. 30, 1999: A sales representative emailed Dr. J. David Haddox, a Purdue executive, about growing concern among physicians over news reports of the diversion and abuse of OxyContin, including people extracting the oxycodone from the tablets for “mainlining” illegally.
“While many sales people have sold controlled release opioids as having less abuse potential, the current situation has put us in an awkward situation,” the sales rep wrote. “I feel like we have a credibility issue with our product. Many physicians now think, OxyContin is obviously the street drug all the drug addicts are seeking.”

March 13, 2000: Purdue sent its sales force 50 copies each of the 1999 American Pain Society treatment guidelines to use in promoting OxyContin to physicians. That’s the same group whose president Sackler planned to invite to the gala because of their “good relationship.”

“The guidelines can be an effective tool for selling our products,” the memo said (Chakradhar and Ross). From this, we can conclude that the company was aware of addiction, abuse, and oxycodone extraction in late 1999, and in early 2000 it continued to push marketing and “educational” materials. I think this is a significant portion of the history that is missing or underexplained in the AI responses.

Another pattern I noticed, specifically within the Google Bard responses, was a tendency to generalize the Southern United States and villainize those who suffered from addiction. This was unique to Google Bard and is worth mentioning because it plays into the larger conversation about how AI tends to overrepresent certain points of view and uphold hegemonic perspectives. Addicts, as a community, are often subjected to scrutiny and looked down upon. In the same vein, the Southern United States can also be subjected to stereotyping and generalization. Some of that bias seems to be represented in the Google Bard responses. Here are some examples:

Generalizing the Southern United States:
“The drug became a symbol of the South’s struggles with poverty, addiction, and healthcare disparities.” (This specific phrase is repeated four times in the Google Bard responses).

“…marketed to doctors in rural areas, where poverty rates were high and access to mental health and addiction treatment was limited.”

“The southern U.S., particularly rural and small-town areas, was disproportionately affected. Factors such as limited access to healthcare, economic struggles, and a higher prevalence of manual labor jobs contributed to a vulnerability to opioid abuse.”

“Communities in the southern U.S. faced challenges related to law enforcement, social services, and public safety.”

Villainizing those affected by addiction:
“The opioid epidemic contributed to increased crime rates, including drug-related crimes, theft, and violence.”

“The drug also led to a surge in crime, as addicts turned to theft and violence to get their next fix.” (This specific phrase is repeated four times in the Google Bard responses).

“OxyContin addicts often turned to theft and violence to get their next fix, and the drug was a factor in many homicides and other violent crimes.”

Pie chart visualizing which AI LMs contributed the most to generalizations of the South and the villainization of addicts.

For the chart above, I counted the total number of times the Southern United States was generalized or people who suffer from addiction were villainized by each language model. From the data, we can see that Google Bard was responsible for 75% of these types of responses, three times as many as ChatGPT. Chatsonic, likely due to its lack of content, did not make these generalizations at all. When the data is visualized this way, it becomes hard to ignore the narrative that Google Bard is maintaining: it supports a more accusatory tone toward both the region and the individuals affected.
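The arithmetic behind the chart is simple tallying. The sketch below reproduces the shares from hypothetical counts; the raw numbers are placeholders chosen only to match the reported 75% / 25% / 0% split, not my actual tallies.

```python
# Placeholder counts of coded generalization/villainization instances,
# chosen only to be consistent with the reported shares above.
counts = {"Google Bard": 9, "ChatGPT": 3, "Chatsonic": 0}

total = sum(counts.values())
for model, n in counts.items():
    share = 100 * n / total if total else 0.0
    print(f"{model}: {n} instances ({share:.0f}%)")
# Google Bard: 9 instances (75%)
# ChatGPT: 3 instances (25%)
# Chatsonic: 0 instances (0%)
```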

Although the Chatsonic responses were underdeveloped, they managed to steer clear of generalizations about the region or those affected by the opioid epidemic. Despite their neutrality, however, the Chatsonic responses lacked depth, even more so than the other AI LMs’. Overall, it is difficult for me to pull a narrative out of the Chatsonic responses because there is so little to work with, which furthers the idea that nuanced understanding is lacking in the responses overall. While Chatsonic provided much less information, the depth of understanding on display was nearly the same as in the longer responses. Chatsonic’s answers to all five prompts totaled only 608 words, and notably, nearly a fifth of that output, 116 words out of 608, downplayed Purdue Pharma’s role. Only in specific prompts was Purdue Pharma even mentioned, while the other AI LMs mentioned the company without being specifically asked about marketing. Chatsonic used vague language like “the pharmaceutical industry” instead of naming a company, for example: “The pharmaceutical industry’s marketing and promotion of OxyContin amplified its prescription sales and availability.” This ambiguity is not necessarily malicious, but it leaves out a key detail in the story of the opioid epidemic within the Southern United States.

To conclude, ChatGPT, Google Bard, and Chatsonic confirmed my suspicion that AI would not be able to provide a fully nuanced understanding of historical events or represent a community in a well-rounded way. The responses maintained a surface-level understanding, and in one language model, themes like the villainization of addicts and generalizations about the Southern United States began to appear. Another critical issue with the responses, and the most concerning, was the lack of information about Purdue Pharma. ChatGPT had arguably the most developed responses and included a decent amount of information about how Purdue Pharma was directly involved. Overall, however, not enough light was shed on how sinister the company actually was in its marketing of OxyContin. The marketing was purposeful and prolonged, and it played a very large role in the toll that opioid addiction took on individuals in the Southern United States. This small study adds to the growing body of evidence that AI is biased and cannot represent many of the communities it is asked to write about.

Works Cited:

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, 2021, pp. 610–623. https://doi.org/10.1145/3442188.3445922
Blue Cross Blue Shield. “America’s Opioid Epidemic and Its Effect on the Nation’s Commercially-Insured Population.” Blue Cross Blue Shield, 2017, www.bcbs.com/the-health-of-america/reports/americas-opioid-epidemic-and-its-effect-on-the-nations-commercially-insured.
Chakradhar, Shraddha, and Casey Ross. “The history of OxyContin, told through unsealed Purdue documents.” STAT, 2019. ProQuest, https://www.proquest.com/trade-journals/history-oxycontin-told-through-unsealed-purdue/docview/2618267289/se-2.
Owusu-Ansah, Alfred L. “Defining Moments, Definitive Programs, and the Continued Erasure of Missing People.” 2023.
Van Zee, Art. “The promotion and marketing of OxyContin: commercial triumph, public health tragedy.” American Journal of Public Health, vol. 99, no. 2, 2009, pp. 221–7. doi:10.2105/AJPH.2007.131714.
