Are we really able to fix social media?
Our personal weekly selection about journalism and innovation. Join the conversation on Facebook and Twitter.
edited by Marco Nurra
- The New York Times talked to 9 experts about how to fix Facebook: Jonathan Albright (Research director at Columbia University’s Tow Center for Digital Journalism), Kevin Kelly (Co-founder of Wired magazine), Ro Khanna (Democrat representing California’s 17th Congressional District, which includes sections of Silicon Valley), Eli Pariser (Chief executive of Upworthy and author of “The Filter Bubble”), Kate Losse (Early Facebook employee who recounted her time at the company in her book, “The Boy Kings: A Journey Into the Heart of the Social Network”), Alice Marwick (Assistant professor of communication at the University of North Carolina at Chapel Hill), Ellen Pao (Chief diversity and inclusion officer at the Kapor Center for Social Impact and a former chief executive of Reddit), Vivian Schiller (Adviser and former news executive at NPR, NBC News and Twitter), and Tim Wu (Professor at Columbia Law School and author of “The Attention Merchants: The Epic Scramble to Get Inside Our Heads”).
- “How to Fix Facebook” — First: Don’t assume it’s broke. Second: Break up its features, recommends David Cohn. “I’ll try and give a more practical solution than what is proposed by many of the NYT suggestions. I respect Tim Wu, for example, but I don’t think FB will become a public benefit corporation.”
- Is Facebook flagging fake news, or just filtering it? by Shane Greenup:
“Personally, I would prefer to live in a world where fake news stories and misinformation fail to spread because my fellow citizens refuse to share it, not because they are prevented from ever seeing it. And while it may not be clear that we can build a world where no one falls for misinformation, preventing people from seeing it guarantees that they will never be able to learn the necessary skills to deal with it. The approach that we take to deal with this problem may be the difference between a thinking, responsible, capable population, and passive, gullible, blindly believing population.”
- Working against mis- and disinformation online. At MisinfoCon in London on Wednesday 25 October, around 80 people from different countries came together to debate the challenges and opportunities of fighting misinformation online, from getting to grips with the numbers behind the problem, to media literacy and understanding the role of visuals and memes. Here are three questions (and answers) that came up at MisinfoCon: Do debunks help or harm?; How do you avoid creating cynics?; Could we create a financial incentive for media literacy?
- First Draft report offers 35 recommendations to counter misinformation. A new report provides a comprehensive analysis of how online misinformation has become widespread — and what technology companies, media organizations and governments can do to combat it. The report, published by First Draft and commissioned by the Council of Europe — an intergovernmental organization focused on human rights — draws upon prior research, news articles and conferences to address what we know about “information disorder.”
- The complexity of information disorder online. “After months of reporting on the impact of misleading, manipulated and fabricated content spread among peers on technology platforms, we’re beginning to see governments and regulators grapple with these issues. It’s not a moment too soon, as it’s more important than ever that we start to think about information disorder from a more sophisticated perspective,” Claire Wardle (First Draft) writes.
- Official attempts at containing fake news on WeChat could easily slip into the territory of censorship. With features reminiscent of WhatsApp, Facebook, and Twitter, WeChat combines the intimacy of mobile messaging and small-group interactions with the capacity for viral dissemination. But WeChat is notoriously opaque. Data is not only something to be monetized, but also coveted information ultimately under the control of Chinese authorities. WeChat has long invested in efforts to detect and remove “fake news,” including a feature for users to report false information and tools that automatically detect keywords to remove associated content. However, within the purview of WeChat’s fake news filter are not only banal rumors such as “padded bras cause cancer,” but also politically undesirable content.
- Aspiring to a “journalism of facts” is the wrong ideal for journalism. We need journalistic sherpas helping citizens navigate through a miasma of shock talk, trolls, and partisan diatribe.
- When fake news is funny (or “funny”), is it harder to get people to stop sharing it? Fiery Cushman, associate professor of psychology at Harvard and head of the Moral Psychology Research Lab, pointed out that people may be sharing fake news simply because they find it funny, whether or not it is accurate. Pennycook pulled up a screenshot of a fake news story claiming that Mike Pence credited gay conversion therapy with saving his marriage. Is there a way to get people not to share fake news that they think is funny, Cushman wondered — and would that be a different kind of intervention from getting them not to share fake news because it’s not accurate?
- Facebook, Google and Twitter testified before Congress. The three companies have already admitted that, unknown to them, Russian-backed accounts used their respective sites to share and promote content aimed at stirring political unrest. These are some of the tweets and Facebook ads Russia used to try to influence the 2016 presidential election. What’s still unknown: whether or not they actually worked.
International Journalism Festival is the biggest annual media event in Europe. It’s an open invitation to interact with the best of world journalism. All sessions are free to attend, and all venues are situated in the stunning setting of the historic town centre of Perugia.