AI21 Labs’ Augmented Frozen Language Models Challenge Conventional Fine-Tuning Approaches Without Sacrificing Versatility

Although today’s large pretrained language models (LMs) have demonstrated impressive zero-shot capabilities across a wide range of tasks, the performance of “frozen” LMs — whose weights remain unchanged — still trails that of LMs whose weights have been fine-tuned for specific downstream…