Why Internet “Debates” Suck

And how to maybe make them suck a little bit less

By MARTIN REZNY

When I was choosing which field to study at a university, I knew I was interested in journalism, but I was forced to pick one more field to combine it with. Why journalism couldn’t have been studied on its own at my preferred uni, I still have no idea. In the end, I narrowed it down to political science and psychology. I only got accepted to the former, so I didn’t really have to choose.

Throughout the whole thing, while the journalism teachers remained completely walled off from the political science teachers, I always suspected that the area where media and communication theory meets politics might be pretty important. Little did I know at the time just how incredibly important the so-called new media were going to make it in a rather short amount of time.

To be fair, there were some crossover courses, like propaganda, which I got an A in. I’m still not sure if that’s a good or a bad thing. Anyway, there were relatively few of them, and they focused on history and past examples. The role of technology was presented as impactful (see technological determinism and Marshall McLuhan), but politically neutral. Once you learn how it works, however, that neutrality ends.

I now believe that the role of advancements in media and communication technology is only politically neutral as long as most political actors don’t understand how the technology works, or that it is a factor at all. Once political people become tech-savvy, the technology becomes their fancy new cudgel. Whichever side strikes first only makes the other side respond in kind.

I hereby apologize for the destructive role most of the people with my education undoubtedly played in related developments over the last couple of decades as media and communication strategists for hire, mostly with zero scruples. Which wasn’t exactly surprising to me, given that the curricula for this type of education are very heavy on techniques and very light on ethics.

In case you’re wondering, my bachelor’s thesis was specifically about the (then) new phenomenon of conspiracy documentary movies, which I analyzed by comparing them to the standard documentary format and to the propaganda, conspiracy theory, and mystification formats. It turned out these films were a bit of all of the above. As is now absolutely everything.

As an expert in the field who could have been responsible, in part, for the recent clusterfuck, if the events in my life unfolded differently, I feel I should at least try to use what I know to maybe help undo it. The reason I’m writing this is a tweet I just read from Lex Fridman, an AI researcher from MIT, who says he’s building ideas to improve internet conversations. Which is nice.

However, in my expert opinion, we’re not going to solve this with AI. Not that AI is useless, it’s just not likely to be helpful in addressing any of the core underlying problems which are shaping the current state of internet discourse. I will now attempt to systematically review what the hardest problems are, and why they’re so resistant to any AI-based solution.

1) THE PROBLEM WITH QUALITY

To put it as simply as possible, AI is great at working with quantity. It literally doesn’t compute quality. That’s because we don’t have any mathematics of quality. The main reason our communication systems care mostly about numbers of views or likes is that numbers are easy to calculate. That’s it.

Traditionally, the quality of writing has been ensured by experienced human editors reviewing it. Today, most mainstream platforms are too big for that to be possible. Humans have their own problems, of course, like unreliability or bias, but unlike AI, we have this thing called understanding.

There’s exactly one method that AI can use to approach understanding: so-called content analysis. An artificial system can scan text, audio, or video for patterns, like which words are being used how often, in which combinations, or how well they’re spelled. And yes, spellcheck is great.

Unfortunately, this method is fundamentally flawed, because what gives words their meaning is context. When AI is used, for example, to pick which posts or videos to censor, it can detect what was mentioned, but not how it was meant. That’s why YouTube may censor valid speech on touchy subjects.
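
To make that concrete, here is a toy sketch of keyword-based content analysis in Python. The watchlist and the posts are invented for illustration, not taken from any real platform, but the limitation is real: the system detects the mention either way, not what it was meant to do.

FLAGGED_TERMS = {"miracle cure", "hoax", "false flag"}  # hypothetical watchlist

def flag_post(text: str) -> bool:
    """Return True if the post mentions any watched term, regardless of intent."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

posts = [
    "This miracle cure will fix everything, doctors hate it!",       # pushing the claim
    "No, that 'miracle cure' is a scam, and here is the evidence.",  # debunking the claim
]

for post in posts:
    print(flag_post(post), "-", post)

# Both posts come out True: the mention is detected, the meaning is not.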

At the same time, you still need people to either train the AI, or to make sense of the results it spits out. AI is not self-aware, and unlike what many AI enthusiasts believe, it is not on any clear path to becoming legitimately aware in the same way that we are; it is part our own bias, part chaos, and part mimicry.

This means that it’s not going to figure out what quality is or how it works for us. It can learn to imitate what we have already decided is good, or, at best, it can identify higher-quality communications with a substantial rate of error by looking for quantifiable variables that sometimes correlate with quality.

The latter process can be of some help, for example by focusing not just on clicks but on retention (length of time spent reading, listening, or viewing) or other so-called engagement metrics (likes or comments), but only of some help. Knowing that there’s “more” of a quality is not enough information. Qualities are of many kinds.

The key question that AI cannot answer is why any particular engaging text was more engaging. “More” is worse if it’s more of “engaging” fear, hate, or stupidity. Unfortunately, content analysis will always have a hard time distinguishing between bad content and the good content addressing it.
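
To illustrate that limitation, here is a toy scoring function in Python with invented weights, not any real platform’s formula. Two posts with identical metrics get identical scores no matter why people engaged with them.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int
    seconds_retained: float  # average time spent reading, listening, or viewing
    likes: int
    comments: int

def engagement_score(p: Post) -> float:
    # Invented weights; real systems tune them, but the limitation stays the same.
    return 0.2 * p.clicks + 0.5 * p.seconds_retained + 0.1 * p.likes + 0.2 * p.comments

calm_explainer = Post("Patient explainer", clicks=900, seconds_retained=240, likes=150, comments=40)
outrage_bait = Post("Outrage bait", clicks=900, seconds_retained=240, likes=150, comments=40)

print(engagement_score(calm_explainer) == engagement_score(outrage_bait))  # True
# Identical numbers, opposite value to the discourse; the "why" is invisible to the metric.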

At the end of the day, you will always need humans with a developed understanding and culture of quality, serving as moderators. The solution, or at least a solution, would therefore be to scale up the quantity of this ability in the population. Every debate needs a good faith, good debater present in it.

There has actually been some promising development in this area, both in how internet platforms and AI systems can aid us in learning and developing such skills, and in AI being the good debater. Even so, it’s still limited to serving as an effective fact provider, grader, or fact-checker, not a philosopher or teacher.

These roles often sound quite vague or esoteric, but rhetoric in particular has a theory and methodology that specifies, if not quantifies, what a good debater, philosopher, or teacher needs to be able to do, and how to evaluate it. The AI debater project had very specific strengths and weaknesses.

Using content analysis, an AI can become a debater armed with all of the evidence to support arguments, or a tester of factual knowledge, or a historian of philosophy. The human faculties that lie entirely beyond that are value judgment, critical thinking, creative thinking, and strategic thinking.

Of these, you could maybe develop an AI communication strategist, up to a point, using game theory. The hard limit of game theory is the fact that life isn’t a game with set rules. That’s why AI is great at chess, or even Go — as long as rules don’t change, mastering more complex games only takes longer.

In real-life debate, only a beginner debater sticks to existing definitions and arguments and researches which evidence is already used to support them. A creative debater can come up with novel, yet reliably effective ideas. A strategic debater can find new ways to use them to maximize their effect.

But most important of all, a critical debater can counter any arguments that are presented, including new arguments used in new strategic ways, without any content research, using only informal logic. As the name implies, AI also doesn’t compute that. It’s limited to formal logic, proven to be incomplete.

I am personally of the opinion that one can develop informal logic or theory of quality to a much more sophisticated degree, even some kind of qualitative mathematics. Then again, maybe automation with more wisdom would just be more destructive when abused. We should try to learn from our mistakes.

2) THE PROBLEM WITH TRUTH

If you believe that AI can learn to reliably determine the truth value, I recommend watching this video:

There’s not much I can add to the technical reasons discussed in the video as to why any truth-determining AI is effectively impossible. To summarize the rest: there are competing pressures of clickbait (what you want right now) and editorial judgment (what you should want), and they are dynamic.

If the system only follows clicking, it spirals downward in terms of quality and truth value of its content. If the system only follows editorial judgment, people will leave the platform to a more permissive one. Trying to strike a shifting balance between truth and bullshit is therefore the best one can do.

Beyond that, there’s also the inevitable escalation caused by algorithms that give you more of whatever you clicked on previously. Whatever you click on, you will only be offered progressively larger quantities and more extreme levels of it. The Algorithm can’t reflect on your choices holistically.

This is how people get locked within echo chambers, how they become more polarized and extreme over time. This doesn’t even require any untruth, only selective exposure to some of the truth, while ignoring any truth that doesn’t support the given ideological position. Truth is tricky like that.
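
Here is a toy simulation in Python of that feedback loop, not any real recommender, just the bare logic of “more of what you clicked”. Starting from equal interests, exposure quickly collapses toward a single dominant topic.

import random
from collections import Counter

random.seed(1)
topics = ["politics", "science", "sports", "music"]
weights = {t: 1.0 for t in topics}  # the user starts out equally interested in everything

history = Counter()
for step in range(200):
    # Recommend in proportion to past clicks: more of whatever was clicked before.
    pick = random.choices(topics, weights=[weights[t] for t in topics])[0]
    history[pick] += 1
    weights[pick] += 1.0  # every click makes the same topic more likely next time

print(history.most_common())
# One topic typically ends up dominating the feed; nothing false was shown,
# only an ever narrower slice of what exists.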

Yes, there are some categories of things that objectively exist in one state or another, independently of anyone’s opinion. But guess what, the interpretation of evidence by a scientist is an opinion. Not everything that scientific authorities say is objectively true, and they’re the most reliable source we have. It only gets worse.

There are truths that are important, possibly even universal, but not objective. You can’t prove that love exists, but does that make the concept not matter? What about freedom, or justice? It’s already a major success if debaters merely agree on their definitions. These so-called value debates are beyond any AI.

But okay, let’s focus on a hypothetical example of a definitely true objective fact, correctly interpreted by the scientists, like “Earth is round”. When you try to communicate this truth, what matters more than the truth value itself is who’s saying it, and what relationship they have with the audience.

Charismatic science deniers can make people distrust scientific institutions. And let’s be honest, scientific institutions can also do it to themselves. Nobody likes an arrogant authority that insults them, while academia is not immune to or devoid of politics. There’s no AI fix for damaged reputation.

In the field of relationships, AI is at best very far from being relatable, highly emotionally intelligent, or demonstrably independent from the politics of its creators, so it isn’t likely to solve even this minor technical hurdle in truth-telling, let alone contribute to the key problem of truth identification.

An AI could perhaps learn to mimic human value judgment to some practical extent. But behind any of its “decisions”, there would be only a limited number of possible mechanisms, all of which are problematic — creator’s bias, majority opinion, or chaos. AI itself would stand for nothing, which matters.

In rhetorical terms, it’s the issue of ethos, one’s character and personal reputation. A trustworthy AI would need to have an emergent (not predefined) personality, at least as complex and structured as a human one, and also be able to somehow make its own decisions, and thus earn respect.

Only then would its opinions on the nature of subjective truths have weight, and only then would it be believed as a communicator of unvarnished objective truth. Until then, it’s just another tool of the disinformation machine, much like various media hacks and other talking heads.

It seems that truth communication is doomed to be an eternal struggle mired in compromise. Relax your rules too much, and you’ll let in chaos, bullshit, and bad faith debaters; become too strict, and the debate stops and people leave. An AI system will never let us off the hook; we have to fight for truth ourselves.

3) THE PROBLEM WITH CONSPIRACIES

Oh boy, is this a tough one. I have already tried to explain this a couple of times in some of my other articles here, but I guess I can always give it another go. Let’s start with the easy part: conspiracies happen. It’s generally beneficial for powerful people to commit crimes, so they often conspire to commit them.

Sometimes, they just embezzle some money. Sometimes, they let a bunch of black men go untreated for syphilis to see what happens. Sometimes, they get a bunch of people on LSD to see what happens. Sometimes, they make up fake reasons to go to war. Sometimes, they spy on every person in the world. It happens.

In that list, you may have recognized some specific conspiracies that we know about. That we know about. I’ll say it one more time, that we know about. It’s guaranteed there are more that happened that we don’t know about, or more specifically, that nobody was able to conclusively prove so far. Still with me?

Okay, now the hard part. There are a number of conspiracy theories running around the internet, which are invariably met with extreme derision and hostility from mainstream news personalities and academics. I’m not going to discuss any particular ones, and I don’t need to; my argument is statistical.

There’s every chance in the world that the mainstream is currently wrong at least about some aspect of at least one of them. There’s therefore nothing wrong with people who are investigating any of them, as long as their investigation is serious and rigorous. We shouldn’t try to stay wrong.

Every time you see an authority, especially a scientist, discourage someone from doing more research, you should treat it as a major red flag. To be clear, respectfully criticizing one’s arguments is fine, ridiculing people is not. Firstly, ridicule that’s punching down is abuse, and secondly, it’s not an argument.

It’s also a red flag whenever someone labels something a conspiracy theory so that we all can stop thinking about it. Science is about more research and more thinking, not the reverse. No theory is too ridiculous or too offensive to investigate, as long as you’re following the methodology of scientific research.

At the risk of sounding conspiratorial, a big part of why this topic is so vehemently ridiculed and suppressed is that the people who commit real conspiracies work very hard to deter anyone from any legitimate investigation, and they have control over a lot of media channels.

Let’s put it this way: if not this, what do you think intelligence agencies even do? Believe me or don’t believe me, but remember, I’m the “got an A in propaganda class” guy. Don’t want to get caught? Delegitimize the whole idea of investigating a conspiracy. There are a number of ways to achieve this goal.

You reinvent the term “conspiracy theorist” as a derogatory label; infiltrate media and academia and make sure to ridicule anyone who discusses any conspiracy theory; infiltrate conspiracy theory forums and groups and fill them with disinformation; and release tons of fake conspiracy theories online.

So, yeah, 5G isn’t spreading coronavirus and Q is full of shit, but the U.S. Navy has recently declassified a bunch of UFO videos and the term “UFO” is still perhaps the most highly censored one in terms of search results on all of YouTube. Just try to search it and see if you can find any independent video.

The AI-related implication of trying to have a free and reasonable debate against a covert adversary is that AI alone is, again, not going to cut it. Deleting or hiding all online discussion of conspiracies just allows the worst bad guys ever to win; the quality of arguments matters.

AI can detect what your conclusions are, but what it should be doing (and can’t do) is evaluating how well you’re making the arguments that lead to your conclusions, however crazy they sound. On the issue of conspiracies, authorities cannot be trusted; they have to be constructively questioned.

AI may be useful in detecting patterns of disinformation, however. You could maybe try to determine how much BS would be online if there were no organized conspiracy to flood the internet with it (if really only normal people were normally talking), or you could use it to trace where it comes from.
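
As a sketch of what such tracing could look like, here is a toy Python example with invented posts and a simplistic similarity measure. It flags near-identical messages posted by supposedly different accounts, which is the kind of coordination pattern machines are genuinely good at spotting.

from difflib import SequenceMatcher
from itertools import combinations

posts = [
    ("acct_01", "The election was stolen, share this before they delete it!!"),
    ("acct_47", "The election was stolen! share this before they delete it"),
    ("acct_93", "the election was stolen, share this before they delete it!"),
    ("acct_12", "Went hiking today, the weather was perfect."),
]

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

suspicious_pairs = [
    (u1, u2) for (u1, t1), (u2, t2) in combinations(posts, 2) if similar(t1, t2)
]
print(suspicious_pairs)
# The three near-identical "election" posts pair up as suspicious; the hiking post does not.
# This flags possible coordination for human review; it still can't judge the arguments.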

In case you’re wondering, that’s no secret. By now, everybody’s grandma has heard of bot farms trying to spread fake news and rig elections. You just have to realize that it isn’t just the Russians, or the Chinese, or the Americans, it’s all of them, all the time. Why? They all want to win, they’re not idiots.

WHAT I SUGGEST WE DO RIGHT NOW

To put it succinctly, what we can do today without having to wait for any magical technological solution is to learn from the literary tradition, mainly serious letter writing between thinkers, and apply it to our systems:

a) Texts shouldn’t be limited in length — on Twitter, one can be pithy, but that’s about the only positive literary quality that’s possible there. Nothing of any substance and clarity can be argued in a single paragraph, which is why people need to be trained to have the longest possible attention span. The rising success of long-form content is one of the best recent developments.

b) Long-form responding should be encouraged — beyond not having a length cap on individual messages, a response to a message should be a message, not a comment. Especially not an anonymous comment. When two people converse as equals who know each other, that’s how you get substantive and civil discourse. I’m not sure how people would respond to it, but has anybody tried a relatively high minimum response length? Also, Medium.com, please stop hiding response stories deeper and deeper.

c) Binary or multiple choice polls should be avoided — I understand that these are convenient, but this only trains people to think in progressively more reductive and polarized terms. No philosophical questions are yes-or-no or answerable by selecting options from a preconceived laundry list. Lists should be limited to questions of preference, like which topic to debate.

d) Liking is wrong — of all the forms of engagement, simple likes or dislikes are the most useless and divisive indicator. “Engagement” should require some effort, at least as much as commenting does, or it promotes shallow, polarizing content. This is how you get inanity like cat videos on top of the charts. It’s not that inane pleasures are wrong when enjoyed in moderation (and privately); they’re just horrible as pillars of one’s culture and civilization. Quality cannot be represented by a single number, so let’s do something else.

e) If you want to ban something, ban bad faith debaters — this is something that a human moderator has to do, but if it’s possible, this is the way to improve the quality of any discourse. Censoring people who are probably wrong or possibly offensive, but who are that way in good faith, is not helpful, as that only polarizes debates and societies more.

Toxic influence is not any particular content itself; it’s people who would use any content, including truth (if they’re smart), to humiliate, intimidate, or deplatform other people, or who have an extreme, divisive agenda. That’s how they can be completely immune to content analysis-based moderation.

Following this logic, banning a controversial comedian or scientist is not helpful, but banning anyone pushing any aggressive identity politics is helpful, even if they’re doing so, ironically, in the name of diversity and inclusion. It’s also helpful to ban Nazis — extremists on both ends of the spectrum damage any capacity of the society to achieve consensus.

The good faith versus bad faith distinction really is key here, as opposed to banning factually incorrect content (like engineers prefer) or banning potentially offensive speech (like political correctness advocates prefer). Healthy discourses include people who are just offensive or wrong. You can’t have a healthy discourse that’s dominated by bad faith debaters, as they’re only arguing anything to achieve an end, typically the end of any and all discourse.

To put it in the form of control questions, is the given debater trying to facilitate greater understanding, bring people together, or calm them down? Great, they’re helping the discourse. Are they attacking individual people, driving wedges between groups, and heating up the debate? They at the very least need a timeout to cool off, and probably are a bad faith actor. The nature of their political affiliation, or especially their physical body, is irrelevant.
