Aviv Ovadya; Robert Pless; David Doermann; Douglas Guilbeault
Pless’s research seeks to collect large data sets, from iPhone apps, webcams, and other sources, to document events in new ways, including tools to fight sex trafficking and to characterize the visual appearance of anorexia on social media. For scenarios where there is a relatively large population of relatively savvy technology users, approaches like this can change the type, amount, and variability of data that can be brought to bear on a problem, and fundamentally change who has the power to build the narrative surrounding a problem or an event.
Large-scale deep learning approaches can be effective at problems that were considered intractable only a few years ago: recognizing faces at very large scales, automatically captioning images, recognizing most objects in a scene, and making very realistic fake videos. These approaches nonetheless have important limitations. Pless discussed why networks tend to exaggerate biases in their training data, why imperceptible image changes can lead to incorrect results, and what approaches might help distinguish realistic (but fake) videos from reality.
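The point about imperceptible changes can be sketched numerically. The toy below, in the spirit of the fast gradient sign method, uses a linear classifier in high dimension; the model and all numbers are illustrative stand-ins, not a trained network.

```python
import numpy as np

# Hypothetical linear classifier: class = sign(w . x).
rng = np.random.default_rng(0)

D = 10_000                     # high dimension, as with image pixels
w = rng.normal(size=D)         # illustrative "trained" weights
x = w / np.linalg.norm(w)      # an input the model scores positively

def predict(x):
    return np.sign(w @ x)

# For a linear model, the gradient of the score w.r.t. the input is w.
# Stepping every coordinate by a small eps against the sign of that
# gradient shifts the score by eps * sum(|w|), which grows with the
# dimension even though each individual coordinate change stays small.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # the per-coordinate nudge flips the class
```

Deep networks are not linear, but they are locally linear enough in high dimension for the same accumulation effect to produce adversarial examples.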
Robert Pless is the Patrick and Donna Martin Professor and Chair of Computer Science at George Washington University. His research focuses on large-scale machine learning and image analysis, with applications to security, medical imaging, and social justice questions.
Aviv Ovadya — The Erosion of Reality
Aviv Ovadya is Chief Technologist at the Center for Social Media Responsibility at the University of Michigan School of Information where he works to ensure our online information ecosystem has a positive impact on society.
We are in a period where our trust in visual media is under attack due to the quality and quantity of tools available for manipulating the images and videos that we all take. DARPA is investing in breakthrough technology to detect media manipulation automatically, at scale, and quantitatively.
The MediFor program is bringing together leading researchers from academia, industry, and government to attack this problem, with the goal of narrowing the gap between manipulators and those who can detect them. One of the latest developments is the rapid advancement of machine learning in computer vision, in particular the use of Generative Adversarial Networks (GANs) to generate photorealistic images and video at scale. Doermann addressed these concerns and what DARPA is doing about them.
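The adversarial setup behind GANs can be stated compactly: a generator G maps noise to candidate samples, a discriminator D scores how likely a sample is to be real, and the two are pitted against each other on the value V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]. The sketch below just evaluates that objective on toy 1-D data; the fixed logistic discriminator, linear generator, and all numbers are illustrative, not trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x, w=2.0, b=0.0):
    """Toy discriminator: probability that x is a real sample."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def G(z, scale=0.5, shift=-1.0):
    """Toy generator: maps noise z to a fake sample."""
    return scale * z + shift

real = rng.normal(loc=1.0, scale=0.3, size=10_000)  # "real" data
fake = G(rng.normal(size=10_000))                   # generated data

# V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
value = np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(fake)))
print(value)
```

During training, D is updated to increase this value and G to decrease it; at the photorealistic scale described above, both players are deep networks rather than the fixed toy functions used here.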
Dr. David Doermann joined DARPA in April 2014. His areas of technical interest span language and media processing and exploitation, vision and mobile technologies.
Douglas Guilbeault — Automating Public Opinion: The New Science of Disinformation
Network science, artificial intelligence, and online experiments enable entirely new levels of precision and control in the domain of disinformation. Advances in computational social science are creating new threats and vulnerabilities for automated deception, while at the same time offering the knowledge needed to strengthen and protect the cognitive immune systems of societies.
Guilbeault reviewed recent technological developments in disinformation campaigns, from AI-driven botnets to network seeding strategies. Throughout the presentation, he examined the ways in which political ideologies are woven into algorithmic design and intervention. In doing so, he identified the conditions necessary for effective technical defenses against disinformation that are informed both by the value-laden nature of algorithms and by their unintended consequences.
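The idea behind network seeding strategies can be illustrated with a simulation. The sketch below runs an independent-cascade spread model on a random graph and measures how far a message travels from two hypothetical seed choices, high-degree "hub" accounts versus random accounts; the graph size and all probabilities are illustrative assumptions, not measurements of any real platform.

```python
import random

random.seed(42)

N, P_EDGE, P_SPREAD = 500, 0.02, 0.1   # illustrative parameters

# Build a random (Erdos-Renyi style) undirected graph.
graph = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_EDGE:
            graph[i].add(j)
            graph[j].add(i)

def cascade(seeds):
    """Independent cascade: each newly activated node gets one chance
    to activate each neighbor with probability P_SPREAD."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph[node]:
            if nbr not in active and random.random() < P_SPREAD:
                active.add(nbr)
                frontier.append(nbr)
    return len(active)

hub_seeds = sorted(graph, key=lambda n: len(graph[n]), reverse=True)[:5]
random_seeds = random.sample(range(N), 5)

# Average reach over repeated trials for each seeding strategy.
runs = 200
hub_reach = sum(cascade(hub_seeds) for _ in range(runs)) / runs
rand_reach = sum(cascade(random_seeds) for _ in range(runs)) / runs
print(hub_reach, rand_reach)
```

Hub seeding tends to reach more of the network for the same number of seed accounts, which is why seed selection, and not just message volume, matters to both attackers and defenders.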
Douglas Guilbeault is a Ph.D. student focusing on computational social science at the Annenberg School for Communication, University of Pennsylvania, where he is a member of the Network Dynamics Group and DiMeNet (Digital Media, Networks, and Political Communication).