What happens if we let AI answer our texts?

Rūta Žemčugovaitė
Published in Blloc
Sep 3, 2021 · 8 min read

If you're using Smart Reply in Gmail, you might as well let GPT-3 write your whole email. How AI-Mediated Communication shapes human connection when we aren't vigilant.

For the past few decades our communication has been mediated via computers (computer-mediated communication, CMC): emails, messengers, forums, social media. We adapted to the limits of each product, whether it's a 280-character tweet, an Instagram DM, a meme, or an email window. So far, we've learned that not everything we want to say gets interpreted the way we intend. Technology changed the way we express emotion: it easily masks nonverbal communication and allows us to maintain personas. It leaves room for our own imagination and projection (for better or worse). Technology has always been the channel, yet it has never communicated on our behalf.

That has all changed with AI-Mediated Communication (AI-MC). AI-MC becomes a third agent between two people. It can come in many forms, from autocorrect to a fully generated email. Its goals are to make communication faster, avoid human errors, change the tone of voice, make it more polite, or simply to automate it entirely. Unlike CMC, AI-MC steps into the middle of the conversation and becomes a mediator between us. (This excludes chatbots, reality-warping social media algorithms, email filters, etc.)

Dimensions of AI-Mediated Communication

So what are the ways AI-MC can operate?

Magnitude — how much we allow AI to produce and generate for us: from autocomplete, to Gmail's “Sounds good!”, to fully generated content (such as copy, articles, or emails).

Agency — how much we allow AI to operate without our supervision. Currently, we restrict it to prompts, suggestions, and nudges about how and when to reply or follow up.

Media type — communication that contains more than text: altering and generating pictures, video (think deepfakes), and audio.

Synchronicity — how much AI can participate in synchronous communication: live text editing, video filters, voice augmentation in real time.

Optimization Goal — the WHY. Why do we create these filters in the first place? What is our goal? Is it social attractiveness, interpersonal success, connection maintenance, authority, or something else?

Role Orientation — from sender to receiver: at which end does AI operate? Is AI writing for us and also reading for us? And do we disclose that there is AI mediation at work at all?
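Taken together, these dimensions give a rough vocabulary for describing any given feature. Below is a minimal sketch of how a product team might encode them, using Smart Reply as the example; the class and field names are our own illustrative assumptions, not terminology from the cited research.

```python
# A toy data structure describing an AI-MC feature along the six dimensions
# above, using Gmail's Smart Reply as the example. Names and values are
# illustrative assumptions, not definitions from the research itself.
from dataclasses import dataclass
from enum import Enum
from typing import Tuple


class Magnitude(Enum):
    AUTOCORRECT = 1        # fixes what we already typed
    SUGGESTED_REPLY = 2    # "Sounds good!"
    FULLY_GENERATED = 3    # a whole email, article, or piece of copy


@dataclass
class AIMCFeature:
    name: str
    magnitude: Magnitude
    autonomous: bool                 # agency: acts without our supervision?
    media_types: Tuple[str, ...]     # "text", "image", "video", "audio"
    realtime: bool                   # synchronicity
    optimization_goal: str           # the WHY
    operates_for: str                # role orientation: "sender", "receiver", or "both"
    disclosed_to_receiver: bool


smart_reply = AIMCFeature(
    name="Gmail Smart Reply",
    magnitude=Magnitude.SUGGESTED_REPLY,
    autonomous=False,                # the user still picks and sends the reply
    media_types=("text",),
    realtime=False,
    optimization_goal="efficiency and social likability",
    operates_for="sender",
    disclosed_to_receiver=False,
)

print(smart_reply)
```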

Blending in with Humans

So, what does it mean if AI can claim a portion of our interpersonal communication? AI can alter not only our text, but also our tone of voice, our mood, and even our personality.

When we use features like Smart Reply, we internalize the AI-generated text to avoid cognitive dissonance. Why? Because ultimately it is we who send the “Sounds great!”, and we don't perceive that as a problem. The self-image problem becomes more apparent when the tone of the generated text sounds less socially acceptable: less friendly, more commanding, or rude. The question arises whether we can assimilate that into our identity just as easily. This becomes the top-down, bottom-up dilemma.

At first glance, AI-MC serves the sender, providing efficiency, social likability, and time optimization. But what about the receiver? Interpersonal relationships form on the basis of the time we spend building, maintaining, and deepening them. And because relationships are driven by personal agency and internal motivations, their validity can come into question when AI does the job for us. Just imagine a dating app deploying GPT-3 in its chat.

Communication Needs Creativity

AI-MC magnitude could become a big problem. “Sounds good!” seems to create no harm. However, it gradually opens the floodgates to more depersonalized relationships. First we find respite in autocorrect and autocomplete, then we trust Grammarly with our tone of voice, then we optimize messaging with Gmail's Smart Reply, and finally we entrust our identities to fully AI-generated email (already available with the help of GPT-3).
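To make that last step concrete, here is a minimal sketch of what a fully generated reply could look like against OpenAI's completions endpoint as it was offered at the time of writing; the prompt wording, engine choice, and parameters are our own illustrative assumptions, not any particular product's implementation.

```python
# A toy sketch: drafting an email reply with GPT-3 through the OpenAI API.
# Prompt template and parameters are illustrative assumptions only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

incoming_email = (
    "Hi, are you free to move our Thursday call to Friday afternoon? Best, Sam"
)

prompt = (
    "Write a short, friendly reply to the following email.\n\n"
    f"Email:\n{incoming_email}\n\nReply:"
)

response = openai.Completion.create(
    engine="davinci",   # a base GPT-3 engine available in 2021
    prompt=prompt,
    max_tokens=80,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

A few lines of glue code is all it would take to move from a suggested “Sounds good!” to an entire message written on our behalf.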

There is a big difference between technology automating mundane, non-creative tasks and technology automating the far more ambiguous work of human communication.

Communication is inherently a creative task. It requires ongoing learning and processing of new information, whether it comes from the person in front of you or from society at large. Most ML/DL models are not built to do that: they are static and operate within the limitations of the data they were trained on. So, if we want to keep more complex AI-MC up to date, we have to continuously retrain the model and fine-tune it for more specific requests (as in the email client and GPT-3 case). As of now, GPT-3 does not know that we went through a global crisis.
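For illustration, keeping such a system current under OpenAI's fine-tuning workflow of the time meant assembling prompt/completion pairs and submitting them as a training file; the example pairs and filename below are hypothetical.

```python
# A rough sketch of preparing fine-tuning data in the prompt/completion
# JSONL format that GPT-3 fine-tuning accepted at the time. The example
# pairs and the filename are hypothetical.
import json

pairs = [
    {
        "prompt": "Reply politely to: 'Can we push the deadline by a week?'\n\n###\n\n",
        "completion": " Of course, a one-week extension works on our side. Thanks for flagging it early.",
    },
    {
        "prompt": "Reply politely to: 'The invoice amount looks wrong.'\n\n###\n\n",
        "completion": " Thanks for catching that. We'll review the invoice and send a corrected version today.",
    },
]

with open("email_replies.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")

# The file would then be uploaded and a fine-tune started, e.g. via the
# OpenAI CLI of the time: openai api fine_tunes.create -t email_replies.jsonl -m davinci
```

Every shift in social context (a pandemic, new slang, a new workplace norm) means another round of data collection and retraining, which is exactly the maintenance burden described above.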

On the other hand, a user could be dealing with social anxiety, mental or physical health issues, a high workload, or any other difficulty that makes it hard to maintain relationships (such as a pandemic). AI-MC could potentially facilitate “connection maintenance”. However, it is not yet researched how AI influences long-term connection, or whether it should only be used in unique and temporary circumstances.

AI as a Scapegoat

We can easily imagine AI mediating within business settings: interacting with customers or users, solving problems, co-responding to emails on our behalf. But what does it look like if we deploy AI into personal relationships? How does it affect our trustworthiness and authenticity?

A study from the Stanford Social Media Lab (currently in review) has found that senders of AI-generated messages were judged to be less socially attractive (though not less competent), and that receivers wanted to interact with them less in the future. This highlights another blind spot in AI-MC: by solving the efficiency dilemma we make the sender more productive, yet we slowly chip away at their social status and trustworthiness, which might turn out to be the more painful loss in the long run.

From 2016 to 2019, Google ran Allo, a messaging app with a virtual assistant that generated automatic reply suggestions. In a study on trust (Hohenstein & Jung, 2019), researchers found that receivers of AI-generated replies (such as Allo's) perceived the senders and their AI helper as less trustworthy, regardless of whether the conversation outcome was positive or negative.

Moreover, when the texting outcome was negative, senders shifted responsibility onto their AI smart replies, positioning the AI as a scapegoat. This type of responsibility distribution is known as the self-serving bias, especially in Western societies, where people tend to attribute responsibility to themselves after a successful outcome and to externalize blame when the outcome is negative.

The findings of the study suggest that when things go wrong in AI-MC, then and only then is the AI considered to have agency.

Scaling Without Responsibility

Since GPT-3 became open to [a selected] public, many startups have gotten the opportunity to build features and products similar to what Google has. This could seem like a step towards AI decentralization; however, foundation models (large language models, or LLMs) like this require massive amounts of data and computational resources, pushing ownership of data and models towards centralization in large corporate entities.

If we look at the emerging products and startups built on GPT-3, many of them rise as direct competitors, such as the almost identical email-generating clients OthersideAI (which recently pivoted out of email generation into smart text generation, competing with Copy.ai) and Magic Email. If AI-generated email can be the next step of evolution in AI-MC, why has Google not done it? With access to millions of emails every day, it could easily train models to do just that. Smart Reply could be a bridge feature before this type of product is launched. However, Google might simply be too big to take such a risk at mass-deployment scale. So, should startups take this chance? After all, startups are the ones that can experiment in this way without facing as much public pressure as the tech giants would.

In large-scale deployment, responsibility often has to catch up with power. Without ethical and social considerations, we could easily transition into an era of deepfake communication, with all the ethical, social, and political challenges that come with it.

So, should your (or our) startup deploy GPT-3 into its products?

To be honest, we thought about it. A lot.
We started with a desire to experiment with something groundbreaking and the opportunity to build GPT-3 into our existing product.

As exciting as it is, releasing products that alter human communication in this new way requires a lot of consideration and a long-term responsibility to develop (and keep updating) an ethical, state-of-the-art product. It also requires a lot of customization and extra training of the model to make it suitable for human-AI-human interaction. Here we face the issue that a feature like an AI personal assistant can easily be a product on its own.

It also seems that by creating GPT-3-based products we would be adapting to a pre-existing solution. GPT-3 was not created as a commercial product in the first place; OpenAI decided to monetize it because they needed a sustainable business model, which is perfectly reasonable. We should ask: is this the solution we actually need for the product we're developing, and does it align with our values of mindful communication? Or does our innovation have a blind spot for the human aspects? As we mentioned in our first article, technology is a dynamic reflector of humanity. Apart from the obvious ingenuity of GPT-3, it also mirrors the extremist and biased parts of us. If we are building products with GPT-3 (or any other product that incorporates AI-MC), then we need to work around these issues with clarity.

Understanding the risks and considerations does not stop us from going forward and innovating. It introduces awareness of the impact that AI-MC can have on a personal and societal level. Anyone deploying features that alter already complex (and often nuanced) human communication should be able to recognize the inherent risks as much as the excitement of the novelty.

Ratio is a productivity homescreen designed by Blloc — based in San Francisco and Berlin.
People first, apps second.

Get Ratio for Android

References
Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations. Journal of Computer-Mediated Communication, pp. 1–12. https://doi.org/10.1093/jcmc/zmz022

Hohenstein, J., & Jung, M. (2019). AI as a moral crumple zone: The effects of AI-mediated communication on attribution of responsibility and perception of trust. Computers in Human Behavior, 106.
https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029?via%3Dihub

Jakesch, M., French, M., Ma, X., Hancock, J. T., & Naaman, M. (2019). AI-Mediated Communication: How Profile Generation by AI Affects Perceived Trustworthiness. Proceedings of the ACM Conference on Human Factors in Computing Systems, 239, pp. 1–13.
https://sml.stanford.edu/ml/2019/02/jakesch-chi19-ai-mediated.pdf
