Google Researchers Explore the Limits of Large-Scale Model Pretraining

Studies have shown that scaling up the size of powerful pretrained models and of their training data significantly improves performance, and that these gains can transfer to downstream tasks, even in few-shot settings. But is there a limit to the performance improvements attainable through such model and data scale-ups?