Towards Solving Critical Challenges In Social Media Content Moderation:

Adam Kendall
Shapes AI
Feb 26, 2020

A Q&A With Digital Privacy & Safety Thought-Leader, Dr Dipayan Ghosh

Shapes AI is delighted to announce that Dr Dipayan Ghosh joins us as a Strategic Advisor.

For some background before the Q&A: Dr Ghosh is the Pozen Fellow at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, where he conducts research on digital privacy, artificial intelligence, and civil rights. His research and writing have been cited and published widely, with recent analysis appearing in The New York Times, The Washington Post, The Wall Street Journal, The Atlantic, The Guardian, Foreign Affairs, Harvard Business Review, Foreign Policy, Time, and CNN. He has also appeared on CNN, MSNBC, CNBC, NPR, and the BBC. A computer scientist by training, Ghosh previously worked at Facebook, where he led strategic efforts to address privacy and security issues. Prior to Facebook, Ghosh was a technology and economic policy advisor in the Obama White House, where he served across the Office of Science & Technology Policy and the National Economic Council. He focused on issues concerning big data’s impact on consumer privacy and the digital economy. He has also served as a public interest technology fellow at New America, a Washington-based public policy think tank. Ghosh received a PhD in electrical engineering & computer science from Cornell University, where he conducted research at the Wireless Intelligent Systems Lab, and completed post-doctoral work at the University of California, Berkeley.

We recently asked Dipayan a few questions relating to the critical challenges in the social media domain and how he believes they can be overcome…

Q: What is the key challenge with the rise of video content on social media platforms?

A: “As online video increasingly becomes the medium toward which consumer attention turns and consumer internet firms accordingly pursue novel seamless integrations of video into user experiences to drive engagement, they will be faced with a new challenge: the spread of offending content through shared video. The wide dissemination of the Christchurch shooter’s video both live and after the fact — which triggered global outrage against Facebook — was seemingly just the tip of the iceberg; beyond such terrorist activity lie present and growing troubles in the spaces of coordinated disinformation, hateful conduct, incitement to violence, spread of conspiracy, algorithmic discrimination, and more. Online video, even on the major companies’ platforms, promises to be host to a wide range of chilling content should technology firms fail to find the solutions necessary to suppress such offending content — particularly as bad actors become increasingly sophisticated.”

Q: How close are the social media companies to finding a solution?

A: “Troublingly, it appears the industry has a long way to go in developing scalable artificial intelligence to contend with the spread of even the most offensive visual content. The central tension is that the industry’s natural tendency is to drive engagement ever upward — but given the scale of this medium, driving engagement will necessarily challenge the firms’ capacity to exercise adequate censorship. We have seen countless examples of content gone awry: the situation in Myanmar, the WhatsApp cases in India, the radicalizing force of YouTube in such nations as Brazil, the Christchurch shootings, and of course the Russian disinformation problem in democratic societies — all of these constitute instances in which corporate content moderation operations failed to live up to the promise of protecting democratic interests. Though there are many factors contributing to these concerns, the industry’s inability to keep up with the spread of nefarious content is a tremendous problem we must address.”

Global outrage and media backlash followed Facebook’s failure to prevent the live broadcast of the Christchurch mosque shootings on its platform

Q: Why do you believe Shapes AI’s solution can address this challenge?

A: “Shapes AI’s solution provides an efficient, scalable, modular, explainable system that technology firms can effectively apply to detect offending content in real time. Shapes AI’s artificial intelligence system, which features a novel deep visual reasoning treatment, leads the industry in its capacity to understand the features, behaviors, and events associated with all entities in a video — and furthermore, to infer the overall meaning of the scene itself. Already tested in a range of live contexts including crowd monitoring, traffic monitoring, road intelligence, and others, Shapes AI has been shown to offer tremendous potential value to public safety managers and individual users. The time has now come for integration into social media for detection, analysis, and reasoning over user-generated video — to the end of detecting and acting on offending content.”

Q: What characteristics does Shapes AI’s solution have that make it desirable in the internet platform context?

[Dr Ghosh identified four main characteristics]

A: [#1] “Intelligence. Shapes AI’s technology reasons over the visual input to infer and flag the behaviors, events, and activities that occur in the scene, enabling highly effective autonomous decision-making. Applied to user-generated content, Shapes AI’s inferential capacity can address a wide range of typical offending video content disseminated on social media platforms.

[#2] Scalability. Given Shapes AI’s use of simple end-labels and domain knowledge terms, relatively minimal training data and implicitly encoded rules are needed to operate the content management system. This generalizable but domain-specified approach unlocks tremendous efficiency while still assuring accuracy in intelligence.

[#3] Interpretability. Shapes AI generates inferences in the process of deep reasoning that enable content managers to understand why particular conclusions are drawn — a critical feature in modern contexts where internet firms must be accountable to governments and the public.

[#4] Interoperability. Shapes AI’s proprietary system is deployable in the cloud, on distributed devices, and on premises. Further, it can analyze video in RGB, LIDAR, and RGB-D formats.

Shapes AI stands today as the most effective video-analysis artificial intelligence tool available in the market — and is positioned to support the needs of internet platform firms as they face some of the greatest technical and policy challenges of the day.”
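To make the intelligence and interpretability points above a little more concrete, here is a minimal, purely illustrative sketch of how a reasoning layer might combine per-frame detections with domain rules to flag a clip and explain why. This is not Shapes AI’s actual system or code; every name in it (Detection, RULES, moderate_video) is hypothetical, and the detector output is stubbed with static data.

```python
# Illustrative sketch only: a rule-based reasoning layer over per-frame
# detections, pairing each flag with a human-readable explanation.
# NOT Shapes AI's implementation; all names are hypothetical.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """A single entity observed in one video frame (hypothetical schema)."""
    frame: int
    label: str        # e.g. "person", "firearm", "crowd"
    confidence: float

# Hypothetical domain rules: (required labels, verdict, explanation template)
RULES = [
    ({"firearm", "person"}, "flag_violence",
     "A firearm and a person co-occur around frame {frame}."),
    ({"fire", "crowd"}, "flag_public_safety",
     "Fire detected near a crowd around frame {frame}."),
]

def moderate_video(detections: List[Detection],
                   min_conf: float = 0.6) -> List[Tuple[str, str]]:
    """Return (verdict, explanation) pairs for every rule the video triggers."""
    confident = [d for d in detections if d.confidence >= min_conf]
    findings = []
    for required_labels, verdict, template in RULES:
        matched = [d for d in confident if d.label in required_labels]
        if required_labels <= {d.label for d in matched}:
            first_frame = min(d.frame for d in matched)
            findings.append((verdict, template.format(frame=first_frame)))
    return findings

if __name__ == "__main__":
    # Stubbed detector output for a short clip.
    sample = [
        Detection(frame=12, label="person", confidence=0.94),
        Detection(frame=14, label="firearm", confidence=0.81),
        Detection(frame=30, label="crowd", confidence=0.55),  # below threshold
    ]
    for verdict, why in moderate_video(sample):
        print(f"{verdict}: {why}")
```

In a real pipeline the static sample would be replaced by a detector’s streaming output, and the explanation strings are the kind of artifact a content manager could review — mirroring the interpretability characteristic described above.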

See https://www.moderation.ai/ (by Shapes AI) for more info on the solution. Disclaimer: Dipayan Ghosh is affiliated with Shapes AI.
