A.I. Will Kill Social Media As We Know It

Ross Mayfield
NewCo Shift
4 min read · Feb 23, 2019


The way OpenAI released the results of its recent research was meant to freak us out, and it worked. The A.I. Text Generator That’s Too Dangerous To Make Public provides examples where, from very little input, the model generates text that is indistinguishable from, if not better than, the quality of this writing. The code wasn’t released for others to inspect and build upon, but comparable models will likely be in others’ hands soon. If not already.
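To make “very little input” concrete, here is a minimal sketch of prompt-driven generation, assuming one of the smaller public checkpoints available through the Hugging Face transformers library; the model name, prompt, and sampling parameters are illustrative, not OpenAI’s withheld system:

```python
# A minimal sketch of prompt-driven text generation, assuming a smaller
# public checkpoint is available via the Hugging Face transformers library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # illustrative checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

# "Very little input": a single-sentence prompt is all the model needs.
prompt = "Social media was supposed to bring us together, but"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps the output plausible-sounding.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A few lines like these, run in a loop with varied prompts, are enough to produce an endless stream of messages that read as if a person wrote them; that is the falling cost the rest of this piece is about.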

Automated Messages are coming to social media, and if social media is to survive, it surely will not be in its present form.

There’s a saying: always bet on text as the best medium for social communication. And many have. Social networks, social media, Amazon and Yelp reviews, and many other mediums have text at the core of their value. But what happens when the cost to produce a message that seems authentic falls to zero?

All bets are off.

Gloria Origgi, an Italian philosopher and researcher at CNRS, observes that since information wants to be free, and is therefore abundant, we are shifting our relationship to knowledge to rely on reputation:

We are experiencing a fundamental paradigm shift in our relationship to knowledge. From the ‘information age’, we are moving towards the ‘reputation age’, in which information will have value only if it is already filtered, evaluated and commented upon by others. Seen in this light, reputation has become a central pillar of collective intelligence today. It is the gatekeeper to knowledge, and the keys to the gate are held by others. The way in which the authority of knowledge is now constructed makes us reliant on what are the inevitably biased judgments of other people, most of whom we do not know.

She suggests the key is helping people discern reputational paths:

What a mature citizen of the digital age should be competent at is not spotting and confirming the veracity of the news. Rather, she should be competent at reconstructing the reputational path of the piece of information in question, evaluating the intentions of those who circulated it, and figuring out the agendas of those authorities that lent it credibility.

We’ve known about the need for social media literacy since social media started. Howard Rheingold’s research and his book Net Smart boil it down to five literacies: attention, participation, collaboration, critical consumption of information (or “crap detection”), and network smarts. The problem is that literacy doesn’t distribute broadly. And what Hemingway called crap detection is about to become very hard, if not expensive.

In the early days of social media and social networks we had Bots & Fakesters, sometimes for fun. Back when Friendster banned Fakesters (at least until it could monetize them) and Tribe.net embraced them, real identity was lost as a potential social media primitive. Fakesters became a feature, not a bug.

And you could tell when content was being automatically generated: the firehose of fake would overwhelm the feed. As commercial and political interests started to dip into the liquidity of attention, astroturfing was easily discernible. As with spam, the attacks became more sophisticated, and the platforms’ relationship with Fakesters, combined with good intentions not to censor, led to halfhearted countermeasures. The platforms assumed neutral stances as common carriers, right up until the Russians woke us up.

This happened even as it was recognized that content was the object driving social network growth. The social networks became less conversational; Twitter’s prompt shifted from “What are you doing?” to “What’s happening?”

The move to content carried with it increased personalization and recommendation, including horrific cases such as when algorithms think you want to die (“Social media platforms not only host this troubling content, they end up recommending it to the people most vulnerable to it.”). The platforms have serious changes to make in how they target and filter, and some have made laudable moves.

With content came its traditional owners, who gained the weapon of the DMCA takedown notice, with which platforms always swiftly comply.

The whole situation seems frivolous and absurd, but actually tells us quite a lot about how the internet works and the ways in which people weaponize copyright law to censor, hide things they’d prefer were forgotten, or threaten others.

I’m sharing this strange example to highlight that regulations won’t keep pace with this change either. This year in the US we’ll probably get some good personal privacy protection for our data (somewhere between the EU’s and California’s). That may in part protect us against third-party targeting with automated messages.

But hyper-targeting is a weapon of the last war, and automated messages may open a new front. What if you could target someone without a robust behavioral profile, and instead simply counter-message? In a few years we’ll all have our own personal trolls, armed with automated messages, talking to us at length and authentically. They will be operator-assisted at first, but soon the input of our opinions and questions will be enough.

That targeting, by the way, doesn’t need the real identity that social networks don’t have anyway (save LinkedIn). There are regulations to fine astroturfers, but enforcement can in no way keep up with the onslaught; that might even be a good thing if you believe in the First Amendment.

In this cowardly new world, the countermeasure may be giving people the tools to defend themselves. Platforms that provide the path for automated messages must be responsible for revealing their reputational paths. And perhaps OpenAI should release its models so toolmakers can empower people to create their own automated messages, so you can respond to your personalized trolls personally, with the same convenience they have.

Comments are open, and I, for one, welcome them.

Ross Mayfield

Head of Product, Zoom. Previously LinkedIn, SlideShare, Socialtext, Pingpad, RateXchange.