Preserving Social Connections against the Backdrop of Generative AI

Considerations and Questions

Alexa Hasse
Berkman Klein Center Collection
Nov 9, 2023


Social connection is a fundamental human need. From both a developmental and evolutionary standpoint, nurturing relationships matter. Our social connections with others can help support our basic needs for survival, provide a source of resilience, and enable us to gain a sense of belonging and mattering in our social and cultural world.

The U.S. Surgeon General recently released a report on an “epidemic of loneliness,” suggesting that a lack of social connection poses major threats to individual and societal health. As noted in the report, the mortality impact of feeling disconnected from others is similar to that of smoking up to 15 cigarettes a day. Research also indicates that loneliness increases the risk of both anxiety and depression among children and adolescents, and that these risks persisted as long as nine years after loneliness was first measured. Conversely, social connection can enhance individual-level physical and mental well-being, academic achievement and attainment, work satisfaction and performance, and community-level economic prosperity and safety.

Colorful silhouettes of diverse individuals, some alone and some in groups, arranged on a beige background.
Cover illustration from Our Epidemic of Loneliness and Isolation: The Surgeon General’s Advisory on the Healing Effects of Social Connection and Community (2023).

Over the past year, there has been rising interest in, and media coverage of, generative AI, or what linguist Dr. Emily Bender terms “synthetic media machines”: systems that generate images or, as with large language models (LLMs), “plausible-sounding” text. Despite the hype, these systems are not entirely new; the 1940s marked the first forays into language models. What is new is how these systems, which are “more ‘auto-complete’ than ‘search engine,’” are being promoted and made available to the broader public.

How do different users perceive these systems? Preliminary research from IDEO sought the perspectives of twelve participants ages 13 to 21 in the U.S. on how generative AI may affect social connection (among other themes). The company first distilled key sentiments associated with these systems from large quantities of social media posts and then presented participants with hypothetical AI-driven products, such as “Build a FrAInd: Your ideal bestie come to life, based on celebs and influencers you love” and “New AI, New Me: An avatar trained on your preferences that has experiences for you.” Participants had varying levels of familiarity with generative AI and diverse life experiences (some were in school and others were not, some had international backgrounds, and so on). When asked for their thoughts on these products, they emphasized that relationships are all “about you learning as you go” and that humans must “remain at the helm.”

In IDEO’s youth-focused research, respondents also voiced concerns about trust.

In the context of human-to-human connection, an important question arises: How will generative AI, such as LLMs, influence the trust we have in other people?

A study from a Stanford and Cornell research team demonstrated that when asked to discern whether online dating, professional, and lodging profiles were generated by an LLM or written by a human, participants selected the correct answer only about half of the time. Although participants could sometimes identify specific markers of text generated by LLMs (i.e., synthetic text), such as repetitive wording, they also pointed to cues such as grammatical mistakes or long words, which, in the study’s data, were more representative of language written by a human. Additional features that participants used to identify human-written text, including first-person pronouns or references to family, were equally present in both synthetic and human-written profiles. Rather than interpreting these results as evidence of machine “intelligence,” the Cornell and Stanford team suggested that individuals may rely on flawed heuristics to detect synthetic text.

The authors proposed that such heuristics may be indicative of human vulnerability: “People are unprepared for their encounters with language-generating AI technologies, and the heuristics developed through . . . social contexts are dysfunctional when applied to . . . AI language systems.” Concerningly, individuals are more likely to share personal information with, and follow recommendations from, nonhuman entities that they view as “human,” raising key privacy questions. At the same time, at least in the short term, they may begin to distrust those who they suspect are using synthetic text in their communication.

Issues of bias are also central, given that systems such as LLMs absorb and amplify the biases in their training data. As Bender and colleagues outline in their ground-breaking paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” the race towards ever larger LLMs rests on web-scraped training data, yet the wider web is not representative of the ways that different people view the world. A number of factors shape 1) who has access to the Internet, 2) who feels comfortable sharing their thoughts and worldviews online, 3) who is represented in the parts of the Internet chosen for the training data, and 4) how the basic filtering applied to training data introduces further distortion.

For instance, per the second factor, whereas user-generated content sites (e.g., Reddit) portray themselves as welcoming platforms, structural elements (e.g., moderation practices) may make these sites less accessible to underrepresented communities. Harassment on X (formerly Twitter), for example, is experienced by “a wide range of overlapping groups including domestic abuse victims, sex workers, trans people, queer people, immigrants, medical patients (by their providers), neurodivergent people, and visibly or vocally disabled people.” As the authors of “Stochastic Parrots” point out, certain subgroups can more easily contribute data, a systemic pattern that undermines inclusion and diversity. In turn, this pattern initiates and perpetuates a feedback loop that diminishes the impact of data from underrepresented communities and privileges hegemonic viewpoints.

Automated facial recognition software is another example. Before the widespread use of generative AI, Dr. Joy Buolamwini and Dr. Timnit Gebru found that popular facial recognition systems exhibited intersectional biases: the systems performed significantly worse on individuals of color and, in particular, on women of color. Biases in AI systems cause major real-world harms across areas such as employment, law enforcement, and education. As more synthetic media is produced, such content is fed back into the training data of future systems, creating a pernicious cycle that perpetuates biases connected to, among other dimensions, race, class, and gender.

In practical terms, what might considerations like these mean for human-to-human connection?

Let’s imagine you are a parent emailing your child’s school counselor to begin a conversation about a behavioral challenge your child is experiencing. You receive a response, but wonder: Was part of this email produced by ChatGPT? If so, which part(s)? Why would the system be used to respond to such a sensitive concern? What might that indicate about the counselor? Perhaps about the school as a whole? Would you fully trust the counselor to assist in the referral of your child?

Furthermore, what if you knew about the significant biases built into and amplified by generative AI? Or about other ongoing harms connected to these systems, such as labor force exploitation, environmental costs that exacerbate environmental racism, and massive data theft? Would this knowledge further erode your trust in communicating with someone whom you suspect may have responded with synthetic text, and, if so, to what degree? Whereas trust may not be the ultimate end goal of human communication, it is still a vital part and outcome of a positive, healthy connection.

There are a number of key questions moving forward. How can we counter the generative AI hype and educate individuals to be critical consumers of these systems, with the understanding that, as Dr. Rumman Chowdhury has pointed out, AI “is not inherently neutral, trustworthy, nor beneficial”? While acknowledging this nuanced landscape, how do we develop regulations that emphasize accountability on the part of the companies that develop and deploy generative AI (especially through a lens of algorithmic justice as described by Deborah Raji); transparency (e.g., knowing that one has encountered synthetic media, understanding how the system was trained, and adopting approaches such as “consentful tech”); and the prevention of exploitative labor?

Returning to social connection and human-to-human communication, when we use language, we do so for a given purpose: to ask another person a question, explain an idea to someone, or simply to socialize. In the context of LLMs, it is important not to conflate word form and meaning. Meaning depends on referents, the actual things and ideas in the world around us, like tulips or compassion, and it cannot be learned from form alone. Given that LLMs are trained only on form, these systems do not necessarily learn “meaning,” but instead some “reflection of meaning into the linguistic form.” As Dr. Bender notes, language is relational by its very nature.

Moving forward, it is essential that we preserve the sanctity of genuine human-to-human connection, with its conflicts, its awkwardness, and its spaces for cultivating relationships built on consistent trust, belonging, and mattering to those in one’s life.

Are you interested in continuing the conversation around social connection? Please fill out the following form! Do you have resources you would recommend including in this piece, or other feedback? Please feel free to reach out to me at any time (alexandra.hasse2556@gmail.com); I am still learning in this space and very much value learning from you.

This essay is part of the Co-Designing Generative Futures series, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the Co-Designing Generative Futures conference in May 2023. All opinions expressed are solely those of the author.
