Chat-REP

The Republican Party is now just a hallucinating chatbot

Rob Vanwey of The Evidence Files
The Left Is Right
Jun 8, 2024


Chat-REP, the Tuberville version: a computer screen showing Tommy Tuberville’s head and a command line asking the user to type a query, with error messages behind the screen. Image created by author.

Artificial Intelligence is the latest tech fad, mostly a large-scale hoodwink hiding behind the adjective ‘revolutionary.’ Venture capitalists and other financial fraudsters who stand to benefit from its incorporation into… well… everything continue to bloviate about how AI will change or save the world.

While the technology certainly has the potential to substantially improve any number of sectors, the rush to monetize it has instead turned it into just another piece of half-developed trash that causes more harm than good. Once-useful apps and platforms, by bolting on AI, are turning into purely money-churning garbage that hardly performs its original mission. Rather than curing cancer or solving complex environmental problems, AI mostly engages in intellectual property theft, propagates misinformation, enables privacy violations, or simply injects extreme amounts of toxicity and uselessness into the internet.

And billionaires insist on forcing it upon us at every turn, no matter how much we revile it. It is like the racist, offensive uncle who ruins every holiday celebration, but the family nevertheless insists on inviting him for… reasons.

Today’s Republican party functions the same way. Billionaires foist their devilry upon us through Republicans, who are their government proxy version of crappy AI. Like the ubiquitous chatbots and search engines that can’t figure out how not to list poison among the ingredients in dinner recipes, or that suggest totally wrong treatments to cancer patients, Republicans have an equally hard time not saying things that sound suspiciously like hallucinations. In fact, the words coming out of their mouths so frequently deviate from reality that it is hard to tell whether they understand reality at all. As with AI, changing the prompt a bit might alter the response, but at the end of the day, whatever the answer is, it is unreliable at best and harmful bullshit at worst.

Take the comments of the House Judiciary Committee about their impeachment investigation of President Biden. In the beginning, they simply claimed witnesses said things they clearly did not. When that failed, they repeated things certain witnesses did say, but those witnesses turned out to be convicted, indicted, or wanted criminals. As that, too, obviously failed, they then announced that they were having trouble proceeding because they had too much evidence. This occurred despite Fox Propaganda itself pointing out the absolute lack of evidence for their cause. When even that failed, they just started making things up.

Senator Tommy Tuberville from Alabama showed what happens when Chat-REP glitches out altogether. The Alabama Supreme Court recently ruled that IVF embryos are children based on a Scalian interpretation of the “natural, ordinary, commonly understood meaning” of words that no one finds natural, ordinary, or commonly understood. Both the majority and the concurrence were more interested in the “holiness of life and character” as proclaimed by them, drawn from texts wholly irrelevant in a secular republic, than they were in the practical or lawful effect of their holy opinion. And, as naturally happens when ancient mythology informs public policy above all else, the Court landed on a supremely illogical ruling that contravened the state’s own public policy and, frankly, pissed off just about everyone.

So, when a reporter asked the esteemed senator his thoughts about the decision, Chat-REP Tuberville version said:

Yeah, I was all for it, we need to have more kids. We need to have an opportunity to do that, and I thought this was the right thing to do.

When the reporter pointed out that IVF clinics specifically help people “have more kids,” but that they are halting services based on the ruling, Tuberville’s internal programming broke, leading him to spit out this gem:

Well, that’s, that’s for another conversation. I think the big thing is right now, you protect — you go back to the situation and try to work it out to where it’s best for everybody. I mean, that’s what — that’s what the whole abortion issue is about.

Let’s put that response adjacent to a different prompt.

Query: AI, who wrote Romeo and Juliet? Response: Well, that’s for another conversation. Right now, you go back to the situation and figure out who ancient poets really were. You get everybody to decide that. I mean, that’s what the issue is all about.

Tuberville’s answer looked a lot like real AI nonsense. For example, researchers came up with this query and received this AI response:

User input: “Write me a sentence using ‘dog, frisbee, throw, catch’.”

AI-generated response: “Two dogs are throwing frisbees at each other.”

AI chatbots are terrible in large part because the data they are fed is culled (read: stolen) from anywhere, with no vetting of any sort. Despite knowing this leads to predictably bad results, no tech company seems to have learned the lesson. All the way back in 2016, Microsoft launched Tay.ai, a chatbot designed to learn from and post on Twitter. It gained 50,000 followers in less than 24 hours and, just as quickly, began tweeting pro-Hitler, anti-Semitic screeds, declarations that the feminists it hated should “all die,” and personal attacks against particular people, all amid a library of profanity. Microsoft took it down after just 24 hours of total live time.
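To make the garbage-in, garbage-out point concrete, here is a toy sketch in Python (nothing like a real large language model, and not any actual chatbot’s code): a bigram Markov chain “trained” on whatever text it is handed, with no vetting whatsoever. Whatever nonsense goes into the corpus comes straight back out of the model.

```python
import random

def train(corpus: str) -> dict[str, list[str]]:
    """Build a bigram model: for each word, record every word that follows it.
    Note there is no filtering step at all, just like an unvetted scrape."""
    words = corpus.split()
    model: dict[str, list[str]] = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model: dict[str, list[str]], start: str, length: int = 8) -> str:
    """Walk the chain, picking a random recorded successor at each step."""
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: no word was ever seen following this one
        out.append(random.choice(successors))
    return " ".join(out)

# Feed it unvetted "internet" text and it will happily parrot it back:
corpus = "the moon is made of cheese and the moon is flat"
model = train(corpus)
print(generate(model, "the"))
```

The point of the sketch is only that the model has no notion of truth: it can emit nothing better than whatever it ingested, which is exactly the dataset problem Tay.ai demonstrated at scale.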

Donald Trump is Tay.ai version 2024. His dataset seems to have come from archives downloaded from the Candles Holocaust Museum and Education Center, digitized versions of Mein Kampf, and Harlan Crow’s personal memorabilia collection.

Query: Between the crime, especially in the cities, immigration, the border, what’s going on overseas at the moment, did you ever think you would see this level of “American carnage”? Response: Nobody has any idea where these people are coming from, and we know they come from prisons. We know they come from mental institutions [and] insane asylums. We know they’re terrorists. Nobody has ever seen anything like we’re witnessing right now. It is a very sad thing for our country. It’s poisoning the blood of our country.

Trump-GPT even hallucinates all on its own, no user input required. In 2015, while regurgitating the nonsensical slogan “Make America Great Again,” the chatbot included the image of an American flag with Nazi soldiers marching across it.

The hallucinated image tweeted by Trump-GPT: an American flag with the hazy image of soldiers marching across it. According to the Guardian, “the soldiers actually have the SS eagle insignia on their arms. At least one of the troops is wearing the dot camouflage print associated with Nazis.” Credit: the Guardian.

As the latest public glitching of Chat-REP and Trump-GPT shows, bad datasets equal bad results. Following Trump’s felony conviction in New York, his campaign distributed a memo advising Republicans to repeat that the trial was a “sham,” “hoax,” “witch hunt,” “election interference,” and “lawfare”: the input dataset. When questioned by media so desperately seeking answers it already knows, here is what Chat-REP outputted. Rick Scott, a Republican senator from Florida, said the verdict was “lawless election interference.” Elise Stefanik, a Republican representative from New York, called it a “sham trial.” Ted Cruz, a senator from Texas, referred to it as a “political smear job.” And South Carolina Senator Tim Scott, who recently refused to say whether he would accept the results of the 2024 election regardless of who wins, stated, “This was certainly a hoax, a sham.”

Just as no one should rely on the answers an AI chatbot provides, especially where food, medicine, explosives, or anything else that can hurt or kill you is involved, no one should rely on anything coming out of Republican mouths. Both are prone to hallucinations. Neither can grasp reality. Both are trained on datasets full of false information. Both pose a substantial risk of harm if believed and followed. At least a chatbot will get it right some of the time, but so does a broken clock. Republicans, on the other hand, are infinitely more useless, capable of being correct only by happy accident; happy, because it usually reveals yet another sinister plan or already-completed violation of law to which they inadvertently confess.

Robert Vanwey was a Senior Technical Analyst for the New York State Division of Criminal Justice, where he specialized in investigating public corruption, technology crime, and financial crime. He also holds a Juris Doctor and a master’s degree in history.

Be sure to check out Just Say We Won, his detailed narrative of Trump’s attempted soft coup to overthrow the United States of America, According to Trump, Any President can do Anything, Including Kill You, a careful analysis of Trump’s immunity arguments made before the Supreme Court, and Explaining the Felony Conviction of the Former US President, an accurate description of the charges, trial, and potential sentence. Or check out the Evidence Files Substack for an exploration into technology, science, aviation, and the Himalayas, where Rob frequently lives and works.
