From Panic to Profit

AI propaganda machines

BRITT PARIS
Data & Society: Points
5 min read · Jul 16, 2019


This is the second blog post in our series on AI and security. As AI becomes increasingly integrated into our everyday lives, we need to start reconceptualizing our notions of security. Read the other post in the series here.

In February 2019, OpenAI, an open-source, non-profit artificial intelligence research group, released a language model (GPT-2) that can generate paragraphs of believable text in response to prompts and mimic the style of a couple of seed sentences given to the model. OpenAI declined to publicly release the fully trained model, training code, or full datasets, “due to concerns about malicious applications of the technology.”
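To make concrete what “a couple of seed sentences” means in practice, here is a minimal, illustrative sketch rather than OpenAI’s own tooling: it assumes the smaller, publicly released GPT-2 checkpoint, accessed through the Hugging Face transformers library (an assumption not drawn from this article), and shows a short prompt being continued into fluent text.

```python
# Illustrative sketch only (not OpenAI's release tooling): continue a short
# seed prompt using the small, publicly available GPT-2 checkpoint hosted
# by the Hugging Face `transformers` library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # small released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking turn of events, city officials announced today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps the output fluent but varied.
output_ids = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The sampling settings here (top_k, max_length) are illustrative defaults; the point is simply that a short seed is enough to produce paragraphs in a matching register, which is what made the release decision contentious.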

They feared that GPT-2 would be yet another automation tool at the disposal of disinformation artists, who already use bots and other methods of automation to increase the spread of falsehoods. OpenAI reasoned that if the model were released to the public, it would make it easier for disinformation purveyors to automate the production of higher volumes of more believable content.

Real harms will come from maligning or impersonating those who are less well-equipped to address disinformation as it spreads online.

The immediate panic around GPT-2’s ability to generate disinformation is similar to the fear around the “deepfake” phenomenon: the recent spate of AI-manipulated videos that make it appear as though people have been caught saying and doing things they never did. The tech industry, journalists, and politicians worry that, in the hands of amateurs and disinformation artists, deepfake technology will be used to impersonate political officials or fabricate events in ways that break down democracy (e.g., tampering with elections, compromising national security, or fomenting widespread violence).

The harms that deepfakes pose are real, but they are different from the ones typically emphasized by the tech industry and media. Most coverage misses the fact that text and videos involving matters of public concern or public individuals receive more scrutiny online. Overall, political officials and issues of political process are afforded more resources for taking down and refuting false information than an ordinary person targeted with gendered or racialized harassment would be. Because it is difficult to remove content across platforms, real harms will come from maligning or impersonating those who are less well-equipped to address disinformation as it spreads online.

GPT-2, like AI-generated video, is touted as the harbinger of a “post-truth” world devolving ever more quickly into chaos, in which democratic procedure and political figures are at the whim of nefarious actors. These narratives help position tech, specifically verification technologies provided by the AI industry, as the only possible solution for mitigating these serious risks.

In this context, we should focus on what economic openings are created by AI-induced panic.

With technologies perceived as dangerous, panic justifies withholding the model from the public and encourages easy solutions or “quick fixes.” Today, tech organizations rushing in with solutions to AI-driven disinformation in the name of the public interest stand to benefit economically from venture capital, government, and philanthropic investments, as well as tech industry partnerships. We should focus on what economic openings are created by AI-induced panic.

After all, much of our current technological infrastructure was built in response to panicked discourse over Cold War information security. Institutions capitalizing on this panic, among them Xerox PARC, IBM, and AT&T, became technological behemoths.

A month after news of GPT-2 broke, OpenAI stated in a blog post that they were shifting from a non-profit toward a “capped-profit” model, which would allow them to profit from selling models, data, and other intel from their products, including GPT-2. This revenue, they stated, would allow them to pay back investors and encourage further investment.

When an organization announces a move from a non-profit to a capped-profit model, following the withholding of a contentious product, we should examine its motives.

OpenAI’s decision to give access to a limited version of GPT-2 solely to a small, select group of journalists, while locking out experts both within and outside AI and social research, makes it difficult to gauge how advanced the model really is. If GPT-2 was indeed withheld as an “experiment in responsible disclosure,” as OpenAI characterized it, they should have withheld it completely until fixes for the risks it poses were fully developed.

Responsible disclosure is a term from the information security industry for a practice in which security risks of a publicly released technology are not shared with the public until there are patches for the vulnerabilities; often a second party is responsible for assessing these risks and helping determine fixes.

New technologies described as overwhelmingly advanced, conceptually inscrutable, and deeply conspiratorial make for headlines that draw attention.

If OpenAI’s claims about the risk of generating “believable fake news” were serious, an act of responsible disclosure would have entailed involving a second party of experts responsible for mitigating risks, according to an agreed-upon ethical framework, with a responsible release plan on the horizon.

The use of “responsible disclosure” suggests that security measures are being worked out, but in OpenAI’s case the term was used to justify closing off the technical model. Experts, users, and society at large will not have access to the model to parse out what those technical risks actually are, let alone take steps to meaningfully address them.

Only the limited few at OpenAI with full knowledge of GPT-2 will be able to develop fixes for any fallout that occurs.

We have to step back and think about how these technologies operate within and extend systems of economic and political power.

As panic around AI-generated fake news and videos has shown, new technologies described as overwhelmingly advanced, conceptually inscrutable, and deeply conspiratorial make for headlines that draw attention. As AI-supported disinformation technologies advance, it is possible that we will see panic around these technologies wielded to justify technological closure in the name of “the public interest.” While caution and care are warranted, we should not accept fast and seemingly easy technological closures for these problems without pushing for social, cultural, legal, and historical explanations.

Meeting the complex social and technical problems of AI-supported disinformation solely with technical solutions obfuscates the need for corporate accountability and opens avenues for the tech organizations and companies that promise solutions to profit from the eventual fallout.

In light of the current panic around AI-driven technologies and the widely held tech industry belief that solutions to social problems must be technological, we have to step back and think about how these technologies operate within and extend systems of economic and political power. We must be more responsible in introducing, explaining, and proposing paths forward for new and increasingly powerful technologies.

Britt Paris is a researcher with the Media Manipulation Initiative at Data & Society and will be joining the faculty at Rutgers University as an assistant professor in the department of library and information science in fall 2019.

Read the other posts in the series here.
