Intelligence is the ultimate dual-use technology.

Lars Buttler
AI Foundation
Aug 23, 2018


We Must Defend Reality

Much has been said about the existential threat of a potential future Artificial General Intelligence (AGI). This potential future event is often called a ‘singularity’, a term that is accurate only in the sense that we cannot predict with any precision what would happen afterwards.

We do know, however, that existing Artificial Intelligence (AI), the narrow AI technologies and tools in use today (at AI Foundation and elsewhere), comes with both massive rewards and grave dangers. Like the powerful general-purpose innovations before it (the domestication of animals, the steam engine, electricity, and networked computers), existing AI affects every industry and every segment of society and is a driving force of rapid growth and revolutionary change.

Existing AI gives us new knowledge and can help us solve many of humanity’s most deep-rooted problems. By automating the repetitive, tiring, and dangerous parts of our jobs, AI can make work more interesting, more like hobby and play.

Amid warnings of the economic disruption that robots and automation could unleash on the world economy as traditional roles disappear, researchers are finding that new technologies will help fuel global growth as productivity and consumption soar.

AI will contribute as much as $15.7 trillion to the world economy by 2030, according to a recent PwC report. That’s more than the current combined output of China and India.

However, existing AI can also destabilize our human society so much that no measure we could take against future AGI might hold unless we address the risks of existing AI now. Such destabilizing dangers include bias and deep manipulation, the loss of shared objective reality, and a labor revolution on a scale not seen since the Industrial Revolution (but occurring in a much shorter time-frame).

AI Foundation believes that working in AI comes with a responsibility, that we must build the foundational elements of Human-AI collaboration now, and create Guardian AI to help offset these risks.

Artificial Reality and Fake Truth

The debate over Fake News and media bias has grown more intense since the 2016 US Presidential election; the recent Cambridge Analytica/Facebook scandal, along with overwhelming public concern, has prompted even the US and UK governments to get involved. Major lawsuits have been brought in the US (against the Trump campaign, Wikileaks, Russia, and Alex Jones) and in the UK (against Facebook) as well.

The current discussion around low-tech, human-generated Fake News and its impact in the last election cycle is just the beginning. When politicians, celebrities and others deny their real actions recorded on tape, on the grounds of alleged AI-spoofing, the results can also be extremely damaging. All this can quickly lead to “Reality Apathy”.

“People are particularly sensitive to any areas of your mouth that don’t look realistic,” — lead author Supasorn Suwajanakorn, AI Foundation Contributor. (Image: UW)

AI already allows for the generation of complex artificial sounds, images and videos, creating anything from artificial environments, e.g. turning actual winter into apparent summer, to ‘real’ artificial voices and ‘real’ artificial people. In other words, we can now create Artificial Reality™, as we call it, increasingly at a level of detail that human ears and eyes cannot discern from the non-artificial real thing. When used responsibly, with all the right disclaimers, Artificial Reality can be incredibly powerful, rewarding, and fun!

“Fake Truth”, including AI-generated fake text, audio, and video created to mislead us or to discredit a political candidate in the decisive days of a campaign, along with other elaborate scams and swindles, can be incredibly disruptive and destabilizing. If the authenticity of media can no longer be trusted, we can no longer agree on a shared objective reality, a critical necessity for a free society.

When individuals no longer care whether they are exposed to Fake Truth, there is the potential to disrupt the democratic process and cause harm to our society, democracy, and human agency.

AI Foundation’s non-profit initiatives are focused precisely on this: Human-AI collaboration to defend us from imminent dangers such as Artificial Reality and Fake Truth.

Reality Defender

At the core of our social responsibility efforts is a nonprofit established to anticipate and counteract the dangers of AI by giving all of us the tools to protect our lives, prosperity, dignity and human agency. We spend considerable resources and energy thinking through, and educating the public about, the risks of AI, especially those that could potentially destabilize human societies or prevent us from enjoying the vast benefits of the advancement of AI.

As we take a long-term leadership role in building and promoting beneficial AI, we are committed to building and publicly releasing beneficial AI products that offset AI’s risks, and to holding every creator of advanced AI tools and products to higher standards of safety engineering and risk prevention.

Example of Computer Vision and Pattern Recognition (CVPR) courtesy of Matthias Niessner, AI Foundation Contributor (image: NiessnerLab.org)

Reality Defender™ is key in the fight against Fake Truth and Reality Apathy. This is the first of our Guardian-AI tools, and an example of effective and beneficial Human-AI collaboration on a societal scale. Released initially as a browser plug-in and completely free for individuals, Reality Defender warns all of us when we are exposed to fake media, and lets us easily report suspected fakes.

Reality Defender relies on the effective collaboration of individuals, the academic community, advanced-AI companies, creators, and beneficial AI agents. From the outset, it uses the best available software, expert systems, and AI methods to detect and evaluate suspected fakes. Simply relying on these methods will not be enough, however, as the “arms race” to create ever more convincing fakes continues. In partnership with the world’s leading AI experts and research organizations, we are building the most advanced fake-detection methods, using Deep Learning, generative AI techniques, and adversarial methods.
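To make the ensemble idea concrete, here is a minimal, purely illustrative sketch of how a detection tool of this kind might combine several independent signals (a deep-learning artifact detector, a heuristic check, an expert-system rule) into a single verdict. The detector names, weights, and thresholds are our own assumptions for illustration; they do not describe how Reality Defender actually works.

```python
# Hypothetical sketch: fusing several fake-detection signals into one
# confidence score. All names, weights, and thresholds are illustrative
# assumptions, not the actual Reality Defender system.

def combine_detector_scores(scores, weights=None):
    """Weighted average of per-detector fake probabilities
    (0.0 = authentic, 1.0 = fake). `scores` maps detector name -> probability."""
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

def verdict(score, flag_threshold=0.7):
    """Map an ensemble score to a user-facing label."""
    if score >= flag_threshold:
        return "likely fake"
    if score >= 0.4:
        return "suspicious: needs human review"
    return "no manipulation detected"

# Example: three independent signals disagree; the ensemble arbitrates.
signals = {
    "face_warping_cnn": 0.9,  # deep-learning visual-artifact detector
    "audio_sync_check": 0.6,  # lip-sync / audio consistency heuristic
    "metadata_rules":   0.8,  # expert-system checks on file provenance
}
score = combine_detector_scores(signals)
print(verdict(score))  # averages to ~0.77, so: "likely fake"
```

A weighted ensemble like this is one common way to keep pace in the detection arms race: as a new class of fake defeats one detector, another signal can still raise the overall score, and community reports can retrain the individual detectors.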

Successful Human-to-Human and Human-AI collaboration is essential, which is why Reality Defender is built on AI Foundation’s Human-AI collaboration platform. Beyond this fundamental collaboration, the platform allows integration of the best innovations from other AI companies, teams, agencies, etc. We partner with leading AI companies and creators to establish and use an “Honest AI watermark” to clearly identify and call out AI-generated text, images, audio, and videos.
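The article does not specify how the “Honest AI watermark” is implemented, but the idea of binding an AI-generated declaration to a specific piece of media can be sketched with a keyed hash. Everything below (the field names, the signing scheme, the keys) is a hypothetical illustration of the concept, not the actual watermark.

```python
# Hypothetical sketch of an "Honest AI watermark": the creator of a piece
# of AI-generated media attaches a signed declaration, and anyone with the
# creator's key can later verify that the declaration matches the bytes.
# The scheme and field names are illustrative assumptions only.
import hashlib
import hmac

def watermark(media_bytes, creator_key):
    """Return a tag declaring the content AI-generated, bound to its bytes."""
    tag = hmac.new(creator_key, media_bytes, hashlib.sha256).hexdigest()
    return {"ai_generated": True, "signature": tag}

def verify_watermark(media_bytes, mark, creator_key):
    """True only if the mark declares AI generation and the media is untampered."""
    expected = hmac.new(creator_key, media_bytes, hashlib.sha256).hexdigest()
    return bool(mark.get("ai_generated")) and hmac.compare_digest(
        expected, mark["signature"]
    )

key = b"demo-creator-key"   # in practice, a registered creator's secret key
clip = b"synthetic video bytes"
mark = watermark(clip, key)
assert verify_watermark(clip, mark, key)              # intact: watermark holds
assert not verify_watermark(clip + b"x", mark, key)   # edited: check fails
```

Note the design choice this illustrates: the watermark travels with the media, so honest creators can label their output proactively, while any edit to the bytes (or a missing mark) becomes a detectable signal for tools like Reality Defender.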

Our growing community of users, who report suspected fakes, flag false positives, and help train advanced AI agents, is critical to the ongoing success of defending reality. We call on each and every one of you to do your part. Be among the first to sign up and help us safeguard the future with Reality Defender at realitydefender.org.


Lars Buttler is cofounder/CEO at AI Foundation, cofounder at Trion Worlds, cofounder/chairman at Madison Sandhill, and cochair at BENS’ National Technology & Innovation Council.