OpenAI Guards Its ML Model Code & Data to Thwart Malicious Usage
Fans and followers of machine learning are doubtless aware of artificial intelligence’s formidable strength in generating content, from synthetic text and fake speech to photorealistic images of scenes that never existed. The fakes are so convincing that some AI researchers have raised the alarm, worried their R&D efforts might fuel malicious use with potentially grave consequences.
OpenAI today announced their latest AI invention — a large language model that can generate realistic paragraphs of text. The model’s selling point is that without any task-specific training data, it still demonstrates compelling performance across a range of language tasks such as machine translation, question answering, reading comprehension and summarization.
The San Francisco-based AI non-profit however has raised eyebrows in the research community with its unusual decision to not release the language model’s code and training dataset. In a statement sent to Synced, OpenAI explained the choice was made to prevent malicious use: “it’s clear that the ability to generate synthetic text that is conditioned on specific subjects has the potential for significant abuse.”
Although OpenAI’s restricted publication approach seems to go against the “open-source spirit” that unites the global AI research community, mounting concerns regarding AI’s widespread societal impacts suggest that such caution may be a sign of things to come in frontier AI research.
Fast takeaways on OpenAI’s new language model
Synced took a peek at OpenAI’s new paper Language Models are Unsupervised Multitask Learners. Here are some fast takeaways:
- Last year, OpenAI released a giant natural language processing (NLP) system that obtained state-of-the-art results on different language task benchmarks. The idea is to train a general language model on a large unlabeled text corpus, then use transfer learning to fine-tune the model on each task. The technique is called “generative pre-training,” or GPT.
- Following on GPT, OpenAI trained a much larger language model, GPT-2, which significantly improves on the realism and coherence of generated text.
- OpenAI trained models of four different sizes (small, medium, large, and extra large). GPT-2, the largest of the four, has 1.542 billion parameters, more than ten times the 117 million of the original GPT. In comparison, Google’s high-profile language model BERT has 340 million parameters in its large configuration.
- The new model was trained on a new dataset called “WebText,” which contains eight million documents totaling 40 GB of text, scraped from outbound links shared on the social media platform Reddit.
- The new model follows the same architecture as the GPT model — a left-to-right transformer — with a few modifications. (Transformer is an architecture proposed by Google Brain in 2017, based on a self-attention mechanism that has proven well suited to language understanding tasks.) OpenAI Researcher Alec Radford told Synced that novel techniques include pre-activation, zero domain transfer, and zero task transfer.
- The model obtains state-of-the-art (SOTA) results on seven of eight traditional language modeling benchmarks without any modifications or fine-tuning.
- The model can also perform a wide range of NLP tasks such as reading comprehension, translation, summarization, and question answering without requiring direct supervised learning. OpenAI Research Engineer Jeff Wu told Synced the discovery led the team to speculate that an unsupervised learning model might one day outperform models built with explicit supervision by maximizing the likelihood of a sufficiently varied training set.
- There are a few shortcomings. Radford told Synced the model is designed for general-purpose rather than domain-specific text synthesis, so it may produce bizarre or incoherent text when a prompt falls into particularly arcane domains.
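The “left-to-right transformer” mentioned above rests on causally masked self-attention: each position may only attend to positions at or before it, which is what lets the model generate text one token at a time. Below is a minimal NumPy sketch of that masking, with a single head and no learned projection matrices (a real transformer learns separate query, key, and value projections); it is an illustration of the mechanism, not OpenAI’s implementation.

```python
import numpy as np

def causal_self_attention(x):
    """Scaled dot-product self-attention with a causal (left-to-right) mask.

    x: array of shape (seq_len, d_model), one token embedding per row.
    For brevity the same matrix serves as queries, keys, and values;
    a real transformer applies learned projections to each.
    """
    seq_len, d_model = x.shape
    # Pairwise similarity scores between every pair of positions.
    scores = x @ x.T / np.sqrt(d_model)            # (seq_len, seq_len)
    # Causal mask: position i may only attend to positions <= i,
    # so each output depends only on tokens to its left.
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores[mask] = -np.inf
    # Row-wise softmax over the unmasked scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x                             # weighted mix of earlier tokens
```

Because position 0 can attend only to itself, its output is simply its own embedding; later positions blend in everything to their left.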
Why is OpenAI not releasing its code and dataset?
Tech companies sometimes choose not to release their research-associated datasets or code based on purely commercial considerations. But for the non-profit OpenAI, the decision reflects concerns emerging around AI tech itself: “We can anticipate how these models may be used for malicious purposes, and can conceive such systems being used to generate misleading news articles; impersonate others online; automate the production of abusive or faked content to post on social media; and other as-yet unanticipated uses,” OpenAI Policy Director Jack Clark wrote in an email to Synced.
While various malicious uses of AI already exist, OpenAI sees the misuse potential of its large language model as more pronounced. Below is a demo from the model fine-tuned to write Amazon reviews. Given a few simple cues on product, rating, and summary, GPT-2 generated reviews that appear remarkably authentic.
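Conditioning a plain language model on structured cues like these typically amounts to serializing the fields into a text prefix and letting the model continue it with a review body. Here is a hypothetical sketch of such prompt construction; the field layout is illustrative only, not the format OpenAI actually used.

```python
def build_review_prompt(product, rating, summary):
    """Serialize review metadata into a text prefix for a language model.

    A language model has no notion of structured inputs; conditioning
    works by encoding the fields as ordinary text, after which the model
    continues the prompt. Field names and order here are invented.
    """
    return (
        f"Product: {product}\n"
        f"Rating: {rating}/5\n"
        f"Summary: {summary}\n"
        f"Review:"
    )
```

A fine-tuned model fed this prefix would then sample tokens after “Review:” to produce the review text itself.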
Clark told Synced that OpenAI’s restricted publication approach is consistent with the organization’s greater mission. As the OpenAI Charter spells out: “We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.”
OpenAI will still release a paper outlining their research methods and the architecture used along with a reduced-capability model. The organization may also consider opening wider access on a case-by-case basis.
So far the AI community’s response to the OpenAI decision remains mixed. It’s clear that the open-sourcing environment long-nurtured in the world of computer science remains sacrosanct to many. Last month, a Reddit user claimed Google AI refused to share some dataset fields for an ACL’18 paper and associated challenge at CVPR’19. Many comments attacked Google: “Reproducibility is the hallmark of science. Without that data this result is not reproducible so the science is sh*t.”
OpenAI NLP Team Manager Daniela Amodei meanwhile told Synced their team consulted a number of respected researchers regarding the decision, and received positive responses.
Responsible publication on AI?
Cautious publication decisions are already common in highly-sensitive domains such as cybersecurity and biotechnology. OpenAI believes the general AI community should also start considering more appropriate publication standards for future AI technologies.
Recent events have revealed AI’s unseemly side to the world. Last year, Reddit user “deepfakes” shared a face-swapping tool online that flooded the Internet with realistic-looking fake celebrity porn videos. Meanwhile, some believe the bot-driven spread of misinformation and fake news may have swayed election outcomes in the US, Italy and elsewhere. Increasingly sophisticated AI-empowered synthesis tools could exacerbate these issues.
OpenAI last year published The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. The report notes that “addressing this challenge may call for reconsidering some of the extant publication norms in the AI community.”
With the global AI community still grappling with the open vs. restricted question, OpenAI’s publication approach on the language model is something of a trial balloon. OpenAI says it will evaluate the results in six months to determine its next step.
Journalist: Tony Peng | Editor: Michael Sarazen