Loose Retweets Sink Democracy?

Russian-backed agents purchased Facebook and Google ads, ran Facebook Pages and maintained a vast number of Twitter bots. The goal of this content is to weaken the United States, to erode our faith in democracy and to pit Americans against Americans.

It’s clear that social media has changed the landscape of information. The old quip that a lie can make it halfway around the world before the truth gets its boots on is more true today than ever before.

So what can we do about it?

Stopping It At The Source

Everyone believes that tech giants like Facebook, Google and Twitter can and should do more to stop this type of disinformation at the source. I’m sure they can do more, but it’s a tough game of whack-a-mole that will never be 100% effective.

And, honestly, I don’t think that’s the biggest problem. Not by a long shot.

Sharing Russian Disinformation

The real problem is the number of people who share Russian-produced content. If people didn’t interact with and share this content, it wouldn’t be effective. Or at least not effective enough.


People are easily manipulated. This isn’t a revelation, really. I mean, people are still falling for Nigerian prince scams or bogus calls from the “IRS” asking you to pay your fine with gift cards.

It’s sad when people are cheated out of their money. But it’s vastly more dangerous when we’re being cheated out of our democracy. Out of our unity.

Klout

Remember when your Klout score was all the rage a handful of years ago? The company would track how your content was received on social platforms and, based on that, assign you a score.

Suddenly, you could have a visual score for how influential you were on a topic. Flattery will get you everywhere!

I found the whole thing to be a rather silly popularity contest that measured activity rather than influence. But I think about Klout as it pertains to our current problem of Russian content.

Pawns

Facebook, Google and Twitter can easily identify the individuals who engaged with and shared content that was ultimately traced back to Russian sources.

What’s to stop these platforms from developing a gullibility score?

I understand that this is a rather provocative idea. Frankly, I’m not quite sure how I feel about it myself. Yet, isn’t the first step in this type of investigation determining if the Russians had accounts that were actively trying to promote this content?

You know, bots.

So they’re already identifying accounts that are knowingly helping to promote this content. In the process, wouldn’t they also identify accounts that are unknowingly helping to promote it?

These are real people who may be regularly falling for Russian content. What should we do about that?
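
To make the idea concrete, here’s a minimal sketch of how such a score might be computed. Everything in it is hypothetical: the share log, the set of flagged posts and the scoring formula are my own assumptions, not anything the platforms have described.

```python
# Hypothetical sketch only -- this doesn't reflect any platform's real API
# or data model. Assume a log of share events and a set of post IDs that
# were later traced back to Russian sources.

from collections import Counter

def gullibility_scores(share_events, flagged_post_ids):
    """Return a 0-1 score per user: the fraction of that user's shares
    that turned out to be flagged content."""
    total = Counter()    # all shares per user
    flagged = Counter()  # shares of flagged content per user

    for user_id, post_id in share_events:
        total[user_id] += 1
        if post_id in flagged_post_ids:
            flagged[user_id] += 1

    return {user: flagged[user] / total[user] for user in total}

# Example: user "b" shared flagged content in 2 of 3 shares -> score ~0.67.
events = [("a", "p1"), ("b", "p2"), ("b", "p3"), ("b", "p1"), ("a", "p4")]
print(gullibility_scores(events, flagged_post_ids={"p2", "p3"}))
```

A real system would be far messier, weighting by reach and decaying old behavior, but the raw ingredients already sit in the platforms’ logs.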

Aiding and Abetting

The stakes here are higher than normal. This isn’t about policing what is right or wrong. What is fact or what is opinion.

This isn’t about stopping fake news. There will still be sites that churn out fake news for economic gain or to manipulate Americans for their own agenda. I have a dim view of these folks, but I’d be hard-pressed to lump them in with active Russian meddling.

This is about regular Americans accidentally helping a foreign enemy.

Shouldn’t they at least be alerted that they’ve been duped? There are hundreds of thousands of people who touched the proverbial hot stove but didn’t get burnt.

Without feedback people aren’t going to learn.

Fool Me Once …

The adage goes: fool me once, shame on you; fool me twice, shame on me.

Wouldn’t it make sense to let users know they’ve been fooled? At least then they might think twice about sharing that next piece of content that pushes all the right buttons.

Yet, I’m sure that even alerts of this nature wouldn’t have a large enough impact. It might slow things down for a time but it wouldn’t last.

So what happens when people know they’ve been fooled and then continue to be fooled moving forward? Shame on them, per the adage, right?

What should platforms do with users who regularly help spread Russian content aimed at weakening America?

They could publicly identify them to other users. Shame on them, right? This would give other users a signal to beware of content from that individual. It’s a reverse Klout score of sorts. But is hanging a digital scarlet letter on folks the right thing to do?

Alternatively, the platforms could suspend certain actions based on this behavior. Get duped too often and you can’t share someone else’s content. Participate in flame wars on this type of content and lose commenting privileges.
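
As a thought experiment, here’s what rules like that might look like, building on the gullibility score sketched above. The thresholds and privilege names are invented for illustration; no platform has published anything of the sort.

```python
# Hypothetical sketch building on the score above. The thresholds and
# privilege names are invented for illustration; no platform has
# published rules like these.

def suspended_privileges(score, flagged_share_count):
    """Map a user's gullibility score to a set of suspended actions."""
    suspended = set()
    if flagged_share_count >= 5 and score > 0.25:
        suspended.add("share")    # duped too often: no resharing
    if flagged_share_count >= 10 and score > 0.5:
        suspended.add("comment")  # repeat offender: no commenting
    return suspended

# A user with 7 flagged shares and a score of 0.4 would lose resharing
# but keep commenting.
print(suspended_privileges(score=0.4, flagged_share_count=7))  # {'share'}
```

Gating on both a count and a ratio matters: you wouldn’t want to punish someone for one unlucky share.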

Uncomfortable

None of what I propose makes me comfortable. It’s dangerously close to limiting free speech.

Though in each of the cases above I think you’d still be able to use the platform for your own ideas. You just wouldn’t be able to promote or engage with other people’s content.

Nevertheless, it makes me queasy.

But I’ve long been concerned with people’s penchant for substituting someone else’s thoughts and ideas for their own. Critical thinking and personal responsibility seem to be dwindling.

So the easy virality of this content gives me pause. Are we doing enough to protect Americans in this new digital age?

Loose retweets sink democracy?

Foreign states seek to divide us, and they’re finding out that we’ll happily do it ourselves given a little push. As we deal with cybersecurity and hacking, isn’t the greater threat social engineering? Shouldn’t we be partnering with our tech giants to help protect Americans from this clear and present danger?

My ideas might not be the right ones, but I think difficult conversations must be had about the reality we now face, and about how the person in the next cube might be unwittingly helping the Russians as they scroll through their feed.