Our Q+A with Gene Policinski

We caught up with the First Amendment expert to discuss COVID-19, censorship, and the importance of getting your news from more than one place.

Jared McKiernan
Our.News
6 min read · Oct 15, 2020

Leading up to the 2020 election, we’re putting out a special interview series in which we’ll talk with leading experts about misinformation, censorship, fact-checking, and plenty more.

For the first installment, we (virtually) sat down with Gene Policinski.

Policinski is the senior fellow for the First Amendment at the Freedom Forum. He’s a veteran multimedia journalist who writes and lectures regularly on First Amendment issues. He was also a founding editor of USA Today.

We caught up with him via Zoom from his office in Indiana. Here’s what he had to say.

Gene, to start us off, do you mind telling us a little bit about your role at the Freedom Forum? How would you describe it, and what are you working on these days?

My job is really to be an observer. We don’t lobby or litigate. So I try to be an educator, whether I’m speaking to the nation or to a third-grade class that has an interest in First Amendment issues. I hope to help people understand our freedoms. We know from our surveys that a lot of Americans aren’t really sure what the First Amendment freedoms are.

One of the things we’re dealing with now is a lot of back-and-forth hate speech. I think everyone would say we don’t want hate speech. But one person’s hate speech may be another person’s very blunt assessment of where things stand. All of it is protected under the First Amendment. So it’s a daunting job, but it allows me to speak to our fellow citizens about something that I hope we all share. And that’s an appreciation for the rights that cover freedom of religion, speech, press, assembly, and petition.

There seems to be a new story or scandal almost every week now that ends up getting shared or retweeted by a prominent member of the media or a public official, and then the post gets removed on Facebook or Twitter. Is it safe to say that these tech platforms would rather not censor, that they would rather not be, to quote Mark Zuckerberg, the “arbiters of truth”?

I think that’s clearly the role they initially staked out for themselves. There’s a law, referred to glibly as Section 230, that says they are not responsible for the content on their sites as long as they don’t edit it or substantially join in the writing or dissemination. I think what’s happening is that most Americans really aren’t sure, but the First Amendment doesn’t cover these tech companies. As private companies, they’re not governed by the same restraints that the First Amendment places only on government. I think these companies would rather not get engaged in making value judgments. But at the same time, a lot of people are arguing that they have a responsibility to counter misinformation about something that could be life-threatening. Of course, how do you do that? Facts are in dispute. So it’s a really tough role, but public institutions have had to face that challenge for some time. There’s a growing movement in the country to treat these tech companies almost as quasi-public utilities, where they are private businesses but operate under government instruction or control. That might bring them under the First Amendment. If it does, it’ll change the internet as we know it.

So how do we avoid getting to the point where tech platforms feel censorship is their only option or perhaps their best remaining option?

If you think back to a time before the internet, we had to do our own independent work. We had to find sources of news and information that we trusted. We had to not accept things at face value. Translate that to today: Just because it’s been tweeted doesn’t mean it’s true. You need a multiplicity of sources rather than narrowing your scope to getting news from only one place. The burden is back on us. What tech platforms can do is make their platforms even more accessible to encourage various groups to join in an organized effort and perhaps present multiple points of view on a particular area. They can rely on groups like Our.News that collate and assess information. Why not organize and fund forums to help us sort through what is true and what is disinformation?

Do you think that social platforms should in fact get into the role of directly fact-checking suspicious content? Or is it better for a third party to handle this?

We’re dealing with a rather new phenomenon, in terms of both how news is distributed and how it’s consumed. It’s all of the above. What would present a clear and present danger? Take the idea, “You can defeat COVID-19 by drinking bleach.” That could kill you. That could do immediate harm. So there’s a role for anyone who presents news and information to be on alert for things like that. I think some of our issues in this first part of the life of the web come from the fact that we do things because we can, not because we should. Maybe we could ask more questions like, “What would happen if I retweet this to my friends?”

Do you think that removing posts effectively stems the flow of misinformation, or does it in fact create more of it?

I don’t think removing posts, with very few exceptions, is very effective. The internet moves at the speed of light. I would much rather see information that’s posted be countered.

Here’s the exception: when I was a newspaper editor, there was information I would never present. There was a young person who had committed suicide. Providing the details of how that came about could encourage or influence someone who was thinking about doing the same. There was enough scientific evidence that the details of a news report, assuming it was newsworthy in the first place, could prompt others to imitate or echo that behavior.

Many people have said that Twitter adding “caution labels” on problematic posts actually amounts to censorship, or a form of it. Is it fair to call labeling censorship, when the content of the post is otherwise still visible?

I actually only use “censorship” when it comes to the government, so I do draw a distinction between censorship and this type of activity.

Has the pandemic taught you anything new about how misinformation spreads or has it really just reinforced a lot of your thoughts and beliefs on the subject?

The filter by which I see things now is both wider and narrower. I only have so much time to look at the screen. I’m not distracted by travel or commuting, so if I want to, I actually have more time to look at more news sources and defeat misinformation, in effect, by having multiple sources to help me sort through what I can identify as the truth. If I’m doing my job as a news and information consumer, I should be using all of these great tools to a higher degree to verify information.

How do you go about respectfully calling out friends and family when they post misinformation?

You don’t want to type “Dear Uncle Fred, you’re an idiot.” It’s not going to get you anywhere and it’s going to make for a crummy Thanksgiving.

Always ask, “Have you seen this?” Present another perspective. If they’re not doing that diligent search for multiple voices, you can do that. I hope I’m not being pollyannaish to suggest that, but I think that’s the best way to go about it.

Tech shouldn’t, in and of itself, give us a license to act in ways we otherwise wouldn’t, or set new standards for how we deal with each other. This is just another way of sitting across from you in a room. We would never say in person what some people would say in a tweet. I think we’re still learning that this technology is not a replacement for human interaction — it’s just another way to do it.

I think we can educate each other. We’re learning how to do this in a very new environment. First, the internet was a toy. Then, it was a tool. Now, it’s an essential part of life. In a span of less than a lifetime we’ve had this tremendous transition to something humanity has never had before. I’m not surprised there are bumps along the way. But I think we go back to the values we’ve always had and ask, “How do they apply here?” Old values, new technology. I think that’s going to work us through a lot of these problems.
