PlayLab — Side Effects and Sideshows

What does a future look like where online content and copywriting are algorithm-driven, not personally authored?

Spinning off of the increasingly common practice of A/B testing and targeted advertising, I’d like to explore a future scenario where online content is no longer generalized into a large room where people shout at each other.

I see the basic functionality as such: by continually collecting information about individuals and their conversations with others through texting, spoken conversations, emails, and so on, powerful text-generation algorithms become able to replicate authentic, contextually appropriate conversation on the fly.

This leads to a dramatically different content production pipeline for individuals. Rather than specifically writing a post or update on social media, they can simply express that they would like to make a comment about a particular subject, more along the lines of a keyword declaration. Then, as each reader encounters it, a unique post is generated, suited both to that reader’s taste and to the way the ‘author’ would address that reader.
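To make this pipeline concrete, here is a minimal sketch of what the per-reader rendering step might look like. Everything here is hypothetical: the `Intent`, `ReaderContext`, and `render_post` names are mine, and the toy template stands in for a real text-generation model conditioned on the author-reader relationship.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    author: str
    topic: str   # the author's keyword declaration, not finished prose

@dataclass
class ReaderContext:
    name: str
    tone: str    # e.g. "blunt" or "guarded", inferred from past conversations

def render_post(intent: Intent, reader: ReaderContext) -> str:
    """Generate a reader-specific post. A real system would condition a
    trained language model on the author-reader relationship; this toy
    version just varies a template by the reader's inferred tone."""
    if reader.tone == "blunt":
        return f"{intent.author} on {intent.topic}: here is exactly what I think."
    return f"{intent.author} on {intent.topic}: I have been mulling this over carefully."

intent = Intent(author="Kaleb", topic="the election")
print(render_post(intent, ReaderContext("college friend", "blunt")))
print(render_post(intent, ReaderContext("grandfather", "guarded")))
```

The key design point is that the author stores only the intent; the rendered text never exists until a specific reader arrives.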

For example, this system would be very evident in discussions of political opinions. While posting about politics today can be divisive and contextually challenging, this system gives a sort of intimacy and delicacy back to the interaction. If I ‘write’ to my far-left college friend, my tone would be blunt, expressive, or sympathetic; but if that message were read by my repugnantly conservative grandfather, my words would be much more guarded, qualified, and terse.

Additionally, once enough content is gathered from a ‘writer’ about a specific topic, any ‘reader’ could request a response from the writer on demand. For example, a friend could ask “Hey Kaleb, do you like Japanese food?” and, because there’s more than enough information to generate an answer, the algorithm could produce a contextually appropriate and likely verbose ‘yes’ without my input whatsoever.
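A minimal sketch of what such an on-demand query might look like, assuming a hypothetical ‘echo’ distilled into a topic-to-stance table; the `opinions` data and `ask` helper are invented for illustration:

```python
# Stances distilled from the author's past conversations (hypothetical data).
opinions = {
    "japanese food": "yes, especially ramen; I could eat it weekly",
}

def ask(author_opinions: dict, question: str) -> str:
    """Answer on the author's behalf if enough prior content exists,
    otherwise defer back to the real person."""
    for topic, stance in author_opinions.items():
        if topic in question.lower():
            return f"(auto-reply) {stance}"
    return "(no echo available; ask Kaleb directly)"

print(ask(opinions, "Hey Kaleb, do you like Japanese food?"))
```

Note the fallback case: the interesting failure modes begin when the system answers anyway, on topics where its confidence is unwarranted.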

What I find most striking about this speculation is how compelling and frightening it is at the same time. Through the power of AI and machine learning, it offers a scalable ‘out’ to the looming problem of social media: context collapse. Additionally, it allows us to engage conversationally with people outside the constraints of their time or willingness to communicate. However, it’s simultaneously dystopian. When our personalities are quantified and modeled, everything generated becomes inherently deterministic; we lose the very human capability of unpredictability. Additionally, questions of ‘truth’ and authorship spring up left and right. Can people be held accountable for words they did not explicitly write, but do believe? What happens when two people meet and discuss the words or ideas of a mutual friend who ‘wrote’ each of them a different response? At what point does rephrasing or changing the tone of an opinion shift its meaning?

Side Effects & Sideshows —

I imagine mechanisms will be invented to combat some of these issues: automated posts may be considered ‘unsigned’, and authors may ‘sign’ posts to indicate they endorse or accept the representation presented by the algorithm. This could also backfire, with ‘authors’ trying to blame rude or inappropriate posts on ‘unsigned’ systems.
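A minimal sketch of how such a signed/unsigned flag might behave, with the `GeneratedPost` class and its methods invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class GeneratedPost:
    author: str
    text: str
    signed: bool = False   # unsigned by default: the algorithm spoke, not the author

    def sign(self) -> None:
        """The author reviews the generated text and endorses it as their own."""
        self.signed = True

    def attribution(self) -> str:
        """How the post is credited to readers."""
        if self.signed:
            return self.author
        return f"{self.author} (unsigned / auto-generated)"

post = GeneratedPost("Kaleb", "Japanese food? Absolutely, I love it.")
print(post.attribution())   # prints "Kaleb (unsigned / auto-generated)"
post.sign()
print(post.attribution())   # prints "Kaleb"
```

The disavowal loophole lives in that default: anything embarrassing can simply be left unsigned.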

Additionally, it would be interesting to observe how ‘authors’ adapt to their own representation. Does their generated voice amplify specific parts of their personality more than others? Does this become a feedback loop that causes their own voice to begin to change? Or do they try to steer their voice towards specific desirable or ‘cool’ tones?

Maybe my generated self is snappy and sarcastic, which I enjoy, and that cyclically drives my own tone, making the persona more dramatic. Alternatively, do I start incorporating slang more often because I think that’s hip or cool, and try to bias my ‘echo’ by filling my conversations with more slang than usual?

Alternatively, are there people who get obsessed with reading their own generated content? A sort of textual Narcissus gazing forever into the river of their own thoughts. What happens to those who get trapped reading regurgitations of their own content without contributing anything new? Do they reach a sort of personality singularity, a small but hollow ‘core’ of who they are that they relive constantly?

Additionally, what happens when the system latches on to specific ideas or phrases more than it should? Do people inadvertently develop catchphrases? Do people trust the analysis more than their own memory, thinking “Wow, I must really care about _____, I talk about it all the time”?

What happens to people who develop their ‘echo’ as a child and then rarely contribute new content to it? Does their voice lag behind in maturity, sounding perpetually childlike? Does this root them in their initial conceptions of ideas without encouraging them to change or rethink their opinions?

To what extent does advertising play a part in this system? Can popular ‘authors’ earn royalties on their content by allowing the algorithm to sneak in endorsements for products they genuinely enjoy? It’s still an honest opinion, but if it’s written by a second-party source, does that transmute its authenticity?

Is there a premium market for buying tones or voices? Can I purchase a wittier online persona? A smarter one? A more articulate one? Does this help me change my social status, or help me cross a language barrier? Do people react to this by becoming afraid of communicating in person without their algorithmic sidekick?

—— To Be Continued ——