Large-scale AI and sharing of models

Yaroslav Bulatov
Jul 21, 2019

Background

In “AI and Compute”, OpenAI reported that the compute used to train the largest AI models, and with it the cost, has been growing exponentially, with a doubling period of roughly 3.4 months.

At the current rate, in 4 years, training the largest model will cost more than launching a rocket into orbit. If the trend continues, it would change the way progress is achieved. Instead of many individual players training models, we will have a few places launching AI rockets.

GPT-5 training facility

AI research could become more like high-energy physics or astronomy, where individuals develop their models “on paper” and wait in line for time on a particle accelerator or a telescope. In these fields, research is publicly funded and there’s a strong culture of sharing. Countries don’t need to launch their own space telescopes; they can reuse data from the Hubble.

For AI research to continue at an optimal pace, it would help to have a similarly robust culture of sharing. In addition to sharing methods and code, it’s useful to share model weights as well.

The reason is that we are seeing more and more architectures that build on top of existing weights. For instance, object detection models often use an ImageNet-trained classification network as a backbone, and text applications build on top of trained checkpoints of GPT or BERT.
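As a concrete illustration, here is a minimal sketch of that pattern, assuming a recent PyTorch/torchvision install; the ResNet-50 backbone and the 10-class head are illustrative choices, not something from this post:

```python
# Sketch: reuse ImageNet-pretrained weights as a frozen backbone
# and train only a small task-specific head on top.
import torch
import torchvision.models as models

# Load an ImageNet-pretrained classifier to serve as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()  # drop the original classification head

# Freeze the pretrained weights; only the new head gets trained.
for p in backbone.parameters():
    p.requires_grad = False

head = torch.nn.Linear(2048, 10)   # hypothetical 10-class downstream task

x = torch.randn(1, 3, 224, 224)    # dummy input image
features = backbone(x)             # "free" ImageNet features
logits = head(features)
```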

Training such models from scratch can be expensive: the recently released XLNet was estimated to cost around $245k to train. Being able to reuse such models lowers the bar for research higher up the stack.

Downsides

The problem with any widely available technology is that “the bad guys get it”, so it makes sense to keep the cost of dealing with them in mind. Weapon design technology, for instance, is restricted because its uses are mostly bad; general manufacturing technology is shared despite bad uses because the positive applications outweigh the negative.

An example of the cost incurred by sharing AI technology comes from my time on Google’s OCR team.

Our team was in charge of Tesseract, an open-source OCR product. An internal reorg merged the OCR team with the reCAPTCHA team, and we started looking at the effectiveness of reCAPTCHA. To our surprise, we discovered that users had been turning our own OCR tool against us. In one case, the team was puzzled about how to respond to a user’s report of a Tesseract failure on what was obviously a reCAPTCHA image.

Tesseract’s accuracy was below human but above random, so an automated solver could keep reloading images until the OCR succeeded. This need to reload was also the attack’s weakness, and the defense was easy to implement.
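For intuition, here is a back-of-envelope sketch of why the reload requirement matters; the per-image success rate below is an assumed illustrative number, not a measurement from the original incident:

```python
# Sketch: cost of the reload-until-success attack under an assumed
# per-image OCR success rate (illustrative, not a real Tesseract figure).
p = 0.3  # assumed probability that OCR solves a given captcha image

# With independent attempts, expected reloads until success is 1/p.
expected_reloads = 1 / p
print(f"expected reloads per solved captcha: {expected_reloads:.1f}")

# The defense exploits exactly this: a legitimate user rarely reloads
# more than once or twice, so rate-limiting or flagging sessions with
# many reloads filters out the automated solver.
```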

When vision networks started taking off, Julian Ibarz ran an experiment to study the feasibility of convnets breaking reCAPTCHA. The result was >99.9% accuracy for a neural net versus roughly 80% for a human, so solving a captcha was no longer a reliable signal of a human user. Developing a defense was more involved. Google did not release the “reCAPTCHA breaking model.”

There’s a similar situation happening with text models right now: just like vision-based captchas before them, text-based content filters are likely to become ineffective. We could soon be in a position where it is impossible to distinguish human-generated text from AI-generated text.

An SEO shop could use such a language model to flood the internet with autogenerated reviews and drive customers to businesses that lack good human reviews. As with captchas before, this would require investment in alternative solutions, for instance relying more on provenance information and less on the content itself.

The Balance

Ultimately, the impact of information sharing comes down to the balance between negative and positive uses. There’s no sense in sharing a “captcha-breaking model”: what are the good uses? A generic language embedding, on the other hand, has positive uses in addition to negative ones, so sharing decisions should weigh that balance. A recent post from @Huggingface goes over the analysis they used to find the balance for their recent release.

Thanks to Ben Mann and Miles Brundage for feedback on an earlier draft of this post.
