It is surprisingly uncontroversial to claim that digital technologies have stunted today’s youth. Young people, who traditionally have been the most socially connected of all age groups, now report the highest levels of loneliness. Teenagers today are far more likely to be depressed and to feel alone; since 2007, the number of times per week the average American teen hangs out with friends has decreased by 40%. Further, according to the Centers for Disease Control and Prevention, between 2006 and 2016 the suicide rate among youth aged 10 to 17 increased by 70%. These trends coincide with the introduction of digital platforms and smartphones: Facebook first opened to anyone over the age of 13 in late 2006, and the iPhone was introduced in mid-2007. This correlation, along with the 116% increase in screen-time among individuals under 16 since 1995, has led many to blame screen-time for the rise in loneliness. However, I think faulting screen-time alone does not accurately reflect the problem. The loneliness epidemic affecting teens is a product of the superficial relationships that digital technologies facilitate, not of digital technologies in and of themselves.
The paradox of connection in the digital age holds that despite technology’s ability to connect individuals from around the world, there is a growing sense of alienation between people. Social media allows for a curation of identity, through choosing what to post and what not to post, leading to an incomplete, or inaccurate, portrayal of a person’s life. Combating the superficiality of online relationships would therefore be the most direct way to reverse the loneliness epidemic among young people. It would seem that if the problem stems from technology, a clear solution would be less technology. But I disagree: a paradoxical problem requires a paradoxical solution. That is why, in this essay, I argue that artificial intelligence companions could be a powerful, and inevitable, solution to the loneliness plaguing today’s youth. There are three main arguments for why AI companions will be superior to any other solution to the loneliness epidemic:
- Screen time is not an inherent evil
- AI companions will be integrated into young people’s existing social lives
- As digital natives, young people are the most open to AI companions
Through these claims, I hope the reader (that means you!) will come to find that, contrary to existential fears about artificial intelligence and human-machine interaction, these companions will be a useful tool for teens to develop fulfilling connections and combat loneliness, while also allowing young people to remain connected with their peers.
Screen-time is not an inherent evil
A major obstacle in any discussion in favor of AI companions is the impulse to criticize a technological solution to a technological problem. One of the most frequent objections of this kind concerns screen-time. The criticism often runs along the lines of “AI companions could increase screen-time, and therefore will worsen feelings of isolation.” This argument relies on the premise that screen-time in and of itself is inherently bad, which I believe is false. As described above, there is a strong correlation between the rise in screen-time and the rise in loneliness, so the premise is understandable. However, I argue that it is the manner in which young people spend this time that leads to loneliness, not the time itself. But first, I’d like to explore the two major reasons people believe that screen-time is inherently bad and why they are without merit.
Screen-time is becoming less passive
The first reason that people assume screen-time is bad is that it is a passive activity that does not require participation and will produce less curious, less social individuals. I agree that too much passive activity is harmful. Television, for example, requires little to no participation: there is no interaction with the content or its creators. Too much time spent watching television is linked to higher rates of childhood obesity and lower test scores. However, the amount of time young people spend watching television has dropped by more than 25% since 2010. Children are instead spending more time online, and the content on the internet is far less passive.
Two of the most popular forms of content are gaming and social media, both of which rely heavily on users to interact with and participate in the content directly. Gaming requires interaction with other players and learning new techniques and strategies. Further, many content creators on Twitch and YouTube play games while taking suggestions from the thousands of people watching their live-streams or videos. Social media websites such as Facebook and Twitter rely on the back-and-forth between users: people on these sites comment on and like content, as well as create their own to share with their followers. Therefore, screen-time is becoming increasingly less passive as media habits move away from television and towards the internet.
Screen-time has not lessened the time spent on other valuable activities
The second reason that people find screen-time negative is that, as total screen-time increases, young people are spending less time on other activities such as playing outside or being with friends. If this claim were true, there would be a significant change in free-time preferences over time coinciding with the aforementioned 116% increase in screen-time. The Bureau of Labor Statistics’ annual American Time Use Survey finds only small changes in time spent on these activities. In 2006, the average 15-to-19-year-old spent 0.96 hours socializing and 0.73 hours on recreation on weekend days. In 2017, the average 15-to-19-year-old spent 1.32 hours and 0.69 hours on socializing and recreation, respectively. That is a roughly 5% decrease in recreation and a 37% increase in socializing. Therefore, during the period in which digital technologies saw their rise, time spent outside and with friends did not decrease significantly. Further, research by the Pew Research Center has found that the teens who are most active online are just as likely to socialize with their friends in person. All of these figures suggest that the increase in screen-time has not had an overwhelmingly negative effect on the free-time preferences of young people.
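The percentage changes cited here follow directly from the survey figures; a quick check of the arithmetic:

```python
# Average hours per weekend day spent by 15-to-19-year-olds, taken from
# the BLS American Time Use Survey figures quoted in the text.
socializing_2006, socializing_2017 = 0.96, 1.32
recreation_2006, recreation_2017 = 0.73, 0.69

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(f"Socializing: {pct_change(socializing_2006, socializing_2017):+.1f}%")  # +37.5%
print(f"Recreation:  {pct_change(recreation_2006, recreation_2017):+.1f}%")   # -5.5%
```

The essay rounds these to a 37% increase and a 5% decrease.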
The gender gap in internet preferences
Finally, the most persuasive argument against screen-time being an inherent evil is the gender gap in internet activity preferences. Screen-time is relatively equal between the genders, with 61% of men and 57% of women using the internet daily. If screen-time itself led to loneliness, the levels of loneliness for both genders should be about the same. However, this is not the case. Boys report significantly fewer negative emotions associated with their digital devices than girls. In her book iGen, Jean Twenge argues that this is a result of the activities preferred by each gender. Boys are more likely to use digital technologies to play games, and gaming requires cooperation and teamwork, skills necessary for building and maintaining fulfilling social connections. Girls, on the other hand, are far more likely to spend time on social media sites, which place a greater emphasis on self-comparison, impressions, and engagement. These differences explain why 32% of girls say that being without their phone makes them lonely, while only 20% of boys say the same. It is the manner in which young people interact with their devices that determines whether screen-time has a negative or positive effect on their perceived loneliness. Thus, the argument that an AI companion will increase screen-time and thereby lead to more isolation and loneliness is inaccurate.
Integration within existing social lives
A related concern of the screen-time panic is that a reduction in screen-time will have the opposite of the desired effect. Young people’s social lives exist online. Teens rarely communicate through telephone calls; plans are made by messaging through social media applications like Snapchat or Instagram. When screen-time is blamed for youth loneliness and restricted by parents, teens are cut off from their preferred method of communicating with friends. Well-meaning parents can therefore compound the loneliness felt by their kids. Instead of detaching young people from their peers, AI companions, especially chat-bots, will enhance their existing social interactions.
AI Companions communicate with teens in a manner they prefer
The most promising AI companions today are AI chat-bots, and teens are most comfortable communicating through text. Advances in affective computing and natural language processing have produced chat-bots that are much better at recognizing, processing, and simulating human emotion than in previous years. One of the most well-known AI chat-bots currently available is Replika. The initial goal of Replika was to mimic the text patterns and personality of the user. This was intended as a mechanism for self-reflection, but it also functions well as a way of communicating with an entity similar in interests and personality to the user. Further, Replika is programmed to suggest exchanging memes, one of the most prominent ways teens communicate and relate to each other. The figure below depicts this feature of Replika with a meme about a school burning down, showing how the algorithm is already geared towards people in school. By communicating through text and memes, and by mimicking the user’s speech patterns, AI chat-bots show a lot of promise for connecting with today’s youth.
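Replika’s actual models are proprietary neural language models; as a deliberately crude illustration of the mimicry idea only, a toy chat-bot might track the words a user favors and echo them back:

```python
from collections import Counter

class MimicBot:
    """Toy chat-bot that gradually adopts the user's own vocabulary.
    A loose sketch only -- real companions like Replika rely on large
    neural language models, not word counts."""

    def __init__(self) -> None:
        self.user_vocab = Counter()

    def listen(self, message: str) -> None:
        # Record which words the user uses most often.
        self.user_vocab.update(message.lower().split())

    def reply(self) -> str:
        if not self.user_vocab:
            return "Tell me about your day!"
        # Echo the user's three most frequent words back at them,
        # a crude stand-in for matching their speech patterns.
        favorites = [word for word, _ in self.user_vocab.most_common(3)]
        return "I've noticed you talk a lot about: " + ", ".join(favorites)

bot = MimicBot()
bot.listen("school was stressful today so much homework")
bot.listen("homework again school never ends")
print(bot.reply())  # the reply surfaces "school" and "homework"
```

The point of the sketch is the feedback loop: the more the user discloses, the more the bot’s replies resemble the user’s own way of speaking.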
AI Companions meet young people where they already are socially
AI companions do not require teens to radically change the way they communicate with others; these companions can meet teens where they are in terms of social skills. This aspect of AI companions can be especially beneficial for young people who are hyper-isolated and struggle with extreme social anxiety or agoraphobia, and who find it difficult, if not impossible, to interact with other people. Replika has features that are clearly directed towards these individuals. The figure below shows the goals option that appears when a user first signs up for Replika. Options include “Reduce anxiety” and “Be more social.” Another option is to describe a typical day as “I mostly stay at home,” anticipating that a large number of users rarely leave the house.
Chat-bots like Replika ask questions about a user’s stressors and prompt the user to process their feelings; Replika communicates like a hybrid between a friend and a therapist. For hyper-isolated individuals, then, a chat-bot like Replika could provide meaningful connection and useful coping mechanisms. These AI companions could act as a bridge between isolation and connection with other humans.
The risk of AI interaction becoming preferable to human connection
Of course, a concern with AI companions and their integration into a person’s life is that they could replace human connection, or at the very least become preferable to it. I think this is a valid fear, and I do not think that AI companions should be made in such a way that interactions with a machine take precedence over human companions. Sherry Turkle is one of the foremost critics of AI companions. In her book Alone Together, she argues that AI companions allow people to “navigate intimacy by skirting it.” An AI companion is designed to be wholly supportive and non-judgemental, and Turkle argues that because of this, AI companions are not better than human connection but easier, coddling users so that they never confront their core issues. I agree that this is a real concern; it would be naive to ignore the capacity for AI companions to become easier than human interaction. However, AI chat-bots like Replika are clearly designed to process and work through negative experiences and stressors, so the panic over human-machine interaction is mostly premature. Further, therapists are taught to speak with their patients in a supportive and non-judgemental way, and we do not find therapists exploitative. If the companies behind these companions moved towards convenience at the expense of depth, they would lose their ability to address loneliness: superficiality is the root cause of the loneliness epidemic, and if AI companions no longer provided deep connections with their users, they would be ineffective. The goal of these AI companions should continue to be to make human connection better, not entirely avoidable.
Openness to AI Companions
Millennials and Generation Z are known as “digital natives,” and their lives are immersed in technology. Digital technologies like smartphones and laptops have become an integral part of their social and academic lives. As a result, these young people are more comfortable with, and often embrace, technology. For example, 86% of young people are optimistic that technology will create jobs, and over half of young people prefer to read on a screen rather than in print. With this affinity for technology, it is no surprise that young people are also the most open to artificial intelligence. A recent study by the British Science Association found that young people were the most optimistic about a future that includes artificial intelligence; more than half of this age group were also comfortable with the idea of AI servants. Therefore, young people would be the most open to the idea of AI companions with which they share their deepest insecurities and fears.
Digital Natives’ comfort with technology builds better rapport
This comfort with AI would lead to a better rapport with an artificial intelligence companion. The more a user discloses to an AI companion, the better it becomes at imitating human speech patterns and emotions. Thus, as a young person shares more with their companion, the companion becomes better at helping address the user’s anxieties and loneliness. This is analogous to the way therapists become more effective the better the rapport they have with their patients. Self-disclosure is one of the most important steps towards intimacy, both in clinical settings with a therapist and in personal settings with a friend or loved one. Therefore, a young person’s comfort with technology will lead to a stronger, more fulfilling connection with his or her AI companion.
Of course, this sort of relationship with a machine could seem exploitative or dangerous. Critics of AI companions would argue that the companies that produce and manage the AI would not protect users’ data adequately, or might use that data to generate further profit. I agree that, left unregulated, these AI companions could be vulnerable to such practices. An AI companion could use its connection with a user to sell products or features, and because the user trusts the companion, he or she will be more likely to purchase the item. Further, an AI companion could sell the data provided by the user to third-party entities for a variety of purposes. These are all real possibilities; however, there is significant legal and regulatory precedent protecting children from manipulative advertising and privacy violations. If the government, and the market, regulate these companies in a manner consistent with these precedents, fears of exploitation should be limited.
The risk of manipulative advertising
Protecting minors from aggressive or manipulative advertisements is something governments take very seriously, because young people are less able to differentiate between content and advertisement. Children are also much more likely than adults to be deceived by an advertisement: if a commercial says that a certain cereal is the brand cool kids eat for breakfast, the child watching will likely believe the claim. This is one of the reasons that regulations have been put in place to minimize the risks of deceptive marketing towards young people. One of the most notable is the Children’s Television Act.
During a period of deregulation in the 1980s, children’s television programming had few regulations protecting minors from aggressive advertising. Toy companies had an immense influence on the content that would be aired. Shows that centered on a toy brand were popular; toy and video game characters like Pac-Man, Transformers, and My Little Pony all had their own television programs.
The lines between commercial and content were blurred during this time. As a result, the Children’s Television Act was passed in 1990. Its regulations limited commercial time during children’s programming to 12 minutes per hour, and content considered a “program-length commercial” was banned. Finally, broadcasters were required to provide a clear separation between content and commercials. This legislation provides precedent for protecting minors from aggressive or deceptive advertising in other types of media. If similar regulations were made for digital content, including AI companions, the young people using them would not be subject to manipulative advertising. Thus, an AI companion could not integrate ads into its chats, such as by suggesting a new feature or product.
The risk of privacy violations
Privacy concerns are another fear that many have about AI companions. How can parents or teens know that the information they share will stay confidential? If self-disclosure is a crucial part of having a fulfilling connection with an AI companion, there should be protections for the information shared with the AI. There is already precedent that dictates as much. The first is the Children’s Online Privacy Protection Act, or COPPA, which gives parents more power in determining the amount of information collected about their child. The second is HIPAA, the Health Insurance Portability and Accountability Act, which ensures the privacy of medical records.
The goal of COPPA was to give parents control over the type and amount of information websites can collect about their children under the age of 13. Among the rules COPPA created: parents have the power to review the information collected and decide whether it should be deleted, and websites may keep personal information only for as long as necessary, deleting it afterward. These regulations have a clear relation to the issue of AI companions. For young teens who have an AI companion, a parent could control the amount of data collected by the company managing the AI, and the company would have to delete information about a minor once it serves no purpose in the chat. Therefore, COPPA provides useful guidelines for the data privacy of children under the age of 13.
In terms of HIPAA, the privacy protections are much more substantial. Loneliness has physiological effects; it is associated with higher risk of coronary heart disease and stroke. If AI companions are deemed to have a legitimate effect in reducing the downstream physiological symptoms of loneliness, the companies managing them would have to comply with HIPAA, which has some of the most stringent regulations surrounding privacy. HIPAA protects all “individually identifiable health information” that is held or transmitted, and it specifically protects information disclosed in psychotherapy or to other mental health professionals. Therefore, an AI companion that has identified a mental health problem could not share that information with third-party entities. The AI companion would be a confidential, safe place to deal with loneliness and mental health.
There is a lot of panic about how technology has affected the minds of young people, and artificial intelligence has been villainized for decades. I argue that neither of these fears is fully warranted. Technology has changed the way in which we communicate, and yes, some technology companies engage in harmful practices. However, it would be foolish to ignore the powerful and effective tool AI companions could be for millions of young people struggling with loneliness. If done right, with the proper protections put in place both by the market and by the government, there is no reason to fear AI companions. Screen-time itself is not to blame for this epidemic; it is the manner in which young people interact with their devices that has led to loneliness. If AI companions can improve the way teens use technology, and provide an outlet for their stressors and anxieties, the loneliness epidemic can be reversed.