How might we design potential solutions to Misinformation?

CfD Conversation Series | March 26th 2021

--

Written by Estefanía Ciliotta

How dangerous is it to live surrounded by misinformation? What are the consequences of promoting “fake news”? What if people could become more aware of potentially fake news on social media?

Designing for conversations around challenging issues is one of our value propositions at the Center for Design at Northeastern University and we do so by providing a platform for healthy discussions and interdisciplinary collaboration. One of these platforms is our CfD Conversation Series. This month’s topic was “Designing Solutions to Misinformation”.

The expert panel that the Center for Design brought together to tackle the issue of misinformation was remarkable. We had the privilege of bringing together renowned researchers, designers, practitioners, journalists, and social media policymakers from different countries: the USA, Italy, and the Netherlands, among others.

They shared different perspectives on how to address this global issue, from UI and UX design interventions with AI to policies and governmental regulation. All panelists presented some of their current and past work on misinformation, along with potential ideas for challenging it.

Certainly, one point remained key for all participants: misinformation needs to be addressed through global conversations.

[Diagram: a user journey in five columns, read left to right: Encounter, Pre-assessment, Consumption, Post-assessment, Decision. Each column is accompanied by bullet points.]
Image credits: Sara Colombo, Guangyu Cheng, Eindhoven University of Technology. This is a first version of a user journey describing the possible user’s interactions with misinformation on social media platforms.

Here, I lay out some takeaways from our Conversation Series.

How much do we want to have gatekeepers vs. citizens involved?

There is tension in defining how to regulate fake news, and whom to hold accountable when it has consequences. At this point, the audience engaged in an interesting conversation about the intersection of design and policymaking, followed by a discussion of how we might come up with design interventions that could help alleviate that tension.

They proposed the idea of having some sort of “gatekeeper” to stay vigilant and raise an alert when information is fake. The question remains, though: should particular people be in charge of preventing the spread of misinformation, or should we give users agency by raising awareness and offering choice?

UI and UX interventions to reduce misinformation on social media

Sara Colombo’s research on misinformation on social media brought together designers from all over the world to collaborate and brainstorm potential design solutions to misinformation.

Some of the ideas the research team will prototype and test revolve around interactive clusters that users can adapt by choosing the kind of news feed they want to see in their profiles, instead of having the social media app automatically populate their newsfeed. Another idea is to promote user awareness through a “Healthy News Intake” feature, showing, for example, a dashboard of the time users spend on certain content or the topics and sources they tend to consume.

Finally, there were discussions about a “Collective Assessment” of news, another way in which users could engage with their networks and invite them to dispute or verify the content they interact with.
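The “Healthy News Intake” idea amounts to a simple aggregation over a user’s reading log. The sketch below is purely illustrative; the data model, field names, and function are hypothetical assumptions, not part of the research team’s prototypes:

```python
from collections import defaultdict

def healthy_news_intake(reading_log):
    """Summarize minutes spent per topic and per source.

    `reading_log` is a list of dicts with hypothetical fields:
    {"topic": str, "source": str, "minutes": float}.
    Returns (minutes_by_topic, minutes_by_source).
    """
    by_topic = defaultdict(float)
    by_source = defaultdict(float)
    for entry in reading_log:
        by_topic[entry["topic"]] += entry["minutes"]
        by_source[entry["source"]] += entry["minutes"]
    return dict(by_topic), dict(by_source)

log = [
    {"topic": "politics", "source": "site_a", "minutes": 12},
    {"topic": "politics", "source": "site_b", "minutes": 8},
    {"topic": "health", "source": "site_a", "minutes": 5},
]
topics, sources = healthy_news_intake(log)
print(topics)   # {'politics': 20.0, 'health': 5.0}
print(sources)  # {'site_a': 17.0, 'site_b': 8.0}
```

A real dashboard would visualize these totals over time; the point here is only that the underlying signal is a lightweight aggregate the user could inspect.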

Certainly, these ideas may raise challenges of their own, regarding validation and reliability, for example, and this is yet another reason to continue the conversation and ensure we can come up with ideas that help us all!

How do we protect users from “fake news” while respecting freedom of expression?

This question is the main struggle for Yvonne Lee, Associate Content Policy Manager at Facebook, and it is certainly a challenging one. Yvonne shared Facebook’s three-part approach to combating misinformation: remove accounts and content that violate Facebook’s Community Standards, reduce the distribution of content rated by independent third-party fact-checkers, and inform users with more context about what they are consuming. She also shared that Facebook is testing different labeling strategies to help users be more aware. Finally, she invited the audience to provide feedback and ideas on how to keep protecting users while respecting freedom of expression, which, after all, is one of the main reasons Facebook exists in the first place.
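The remove/reduce/inform approach is essentially a triage policy. The sketch below uses invented flags and thresholds to make the three tiers concrete; it is not Facebook’s actual enforcement logic:

```python
def triage(post):
    """Map a post to one of three hypothetical enforcement actions.

    `post` is a dict with invented fields:
    - "violates_standards": bool, set by content review
    - "fact_check_rating": str or None, e.g. "false" from a
      third-party fact-checker
    """
    if post["violates_standards"]:
        return "remove"   # take the account or content down
    if post["fact_check_rating"] == "false":
        return "reduce"   # demote distribution, attach a label
    return "inform"       # surface extra context for the reader

print(triage({"violates_standards": False, "fact_check_rating": "false"}))  # reduce
```

The tension the panel discussed lives in the middle tier: “reduce” limits reach without removing speech, which is exactly where labeling and freedom-of-expression concerns collide.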

In addition, Tommaso Venturini shared his research on what he calls “junk news”: content that plays with our “collective attention” and distracts people from the important conversations that actually need to be happening. He has prototyped and experimented with different content displays and layouts to see how people react to them.

Finally, he proposed a counterpart to Facebook’s approach to combating misinformation: remove trend-oriented recommendation algorithms, reduce addictive interface design, and inform users about the attention marketing infrastructure.
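The first of those points, removing trend-oriented recommendation, can be illustrated by contrasting an engagement-ranked feed with a plain chronological one. The fields and functions below are invented for illustration:

```python
def engagement_ranked(posts):
    """Rank posts by raw engagement (likes + shares), the
    trend-oriented pattern Venturini argues against."""
    return sorted(posts, key=lambda p: p["likes"] + p["shares"], reverse=True)

def chronological(posts):
    """Rank posts newest-first, ignoring engagement signals."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"id": 1, "timestamp": 100, "likes": 900, "shares": 400},  # viral but old
    {"id": 2, "timestamp": 300, "likes": 3, "shares": 0},      # recent, quiet
    {"id": 3, "timestamp": 200, "likes": 50, "shares": 10},
]
print([p["id"] for p in engagement_ranked(posts)])  # [1, 3, 2]
print([p["id"] for p in chronological(posts)])      # [2, 3, 1]
```

The viral post dominates the engagement-ranked feed regardless of its accuracy; the chronological feed removes that amplification loop entirely.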

The media ecosystem promotes the rapid spread of fake information

To complicate things further, Roberto Patterson, an M.S. student in Media Advocacy at Northeastern University, shared a couple of concerns related to our current media ecosystem.

Sharing information is so easy that misinformation spreads even faster. Social media platforms are designed for rapid dissemination and engagement among people; they support the spread of information whether it is real or fake.

In addition, Roberto pointed out that it is important to acknowledge and understand that people rely on different information authorities, which plays a huge role in how misinformation can be managed. For example, certain information authorities might undermine efforts to combat misinformation if they contradict design interventions.

We can’t tell when something is “fake news”, but labeling can help

Seriously, we can’t, even though Myojung Chung’s studies of misinformation on social media showed that 84% of Americans believe they can easily identify fake news.

In her studies, she also found that labeling “fake news” was effective at catching people’s attention and prompting fact-checking, which in turn decreased sharing intentions. In addition, labeling proved to reduce the social media metrics that promote sharing.

Furthermore, John Wihbey and Roberto Patterson worked on a research project on the Ethics of Content Labeling, which dives deeper into ethical analysis by evaluating different alternatives and approaches to content labeling on social media.

Yet analyzing misinformation labeling and design can be hard, because every culture is different, and culture shapes the way people interact with and consume information.

Thus, some questions still remain: How might we be able to identify and label “fake news”? What would the best “label” be? What would it look like? What does it need to show to get people’s attention? How can we manage labeling in a global setting?

How narrow is the solution space? How can we open it up?

Finally, we discussed the importance of raising awareness so that people acknowledge and understand the risks and consequences of misinformation. The audience and panelists alike agreed that this matter is critical, and that to address it, everyone needs to be involved. Therefore, if there are other voices that could expand the solution space, we need to bring them in.

Enabling conversations about this topic can become a big stepping stone in addressing misinformation. What do you think? Share your ideas with us here, via email, or on our other social media channels linked below.

Written by Estefanía Ciliotta
Center for Design, Northeastern University

Interested in knowing more? Watch the full conversation!

For more information on this event visit https://camd.northeastern.edu/event/designing-misinformation/

CfD Faculty Curator:

John Wihbey is a faculty member at Northeastern University, where he heads the graduate programs in the School of Journalism. He is the author of The Social Fact: News and Knowledge in a Networked World (MIT Press, 2019). He is currently Lead Investigator for the Ethics of Content Labeling Project at Northeastern’s Ethics Institute and a co-founder of the Co-Lab for Data Impact.

Panelists

Myojung Chung is an assistant professor of Journalism and Media Advocacy at Northeastern University. In her research and teaching, she focuses on how the emergence of new media has changed journalism and strategic communication. She is particularly interested in how online participatory behaviors such as commenting, liking, and sharing affect audiences’ processing of news or other mediated messages, and how to make messages more persuasive and effective in the digital era.

Roberto Patterson is a graduate student in the M.S. Media Advocacy program, a joint degree between the Northeastern College of Arts, Media and Design and the School of Law. He is a research assistant with Northeastern’s Ethics Institute, where he is studying the global dimensions of misinformation and platform policy.

Sara Colombo is an Assistant Professor of Design at Eindhoven University of Technology, where she works at the intersection of experience design, ethics, and emerging technologies, in particular AI. Her research interests concern the impact emerging technologies have on users when embedded in everyday artifacts, and their consequences on human experience and behavior at the individual and societal levels.

Tommaso Venturini is a researcher at the CNRS Centre for Internet and Society, an associate researcher of the médialab of Sciences Po Paris and a founding member of the Public Data Lab. He has been a recipient of the “Advanced Research” fellowship of the French Institute for Research in Computer Science and a lecturer in “digital methods” at the Department of Digital Humanities of King’s College London.

Yvonne Lee is an Associate Content Policy Manager at Facebook, where she works with academics and civil society organizations on issues surrounding misinformation and algorithmic ranking. Prior to that, she has worked at Andreessen Horowitz, the Office of Congressman Ro Khanna, and the Carnegie-Tsinghua Center in Beijing. She has a BA/MA from Stanford University, where she studied the spread of misinformation on social media platforms.

Join us! #CenterForDesign

📩 centerfordesign@northeastern.edu

🔗 LinkedIn 🔗 Twitter 🔗 Facebook 🔗 Website

Center for Design @ Northeastern University

An interdisciplinary design research center for exchanging knowledge and practices, shaping common tools and methods, and fostering new research lines.