Published in SyncedReview
Counterfactual Memorization in Language Models: Distinguishing Rare from Common Memorization

The practice of fine-tuning pretrained large neural language models (LMs) for specific downstream tasks has enabled countless performance breakthroughs in natural language processing (NLP) in recent years. This paradigm has also inspired machine learning researchers to explore such models’ ability to generalize by avoiding the…

Synced

AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global