Microsoft & Peking U Researchers Identify ‘Knowledge Neurons’ in Pretrained Transformers, Enabling Fact Editing

Synced
Published in SyncedReview
4 min read · Apr 27, 2021


Large-scale pretrained transformers learn from corpora containing oceans of factual knowledge, and are surprisingly good at recalling this knowledge without any fine-tuning. In a new paper, a team from Microsoft Research and Peking University peers inside pretrained transformers, proposing a method to identify the “knowledge neurons” responsible for storing this knowledge and showing how these neurons can be leveraged to edit, update, and even erase relational facts.
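
The paper’s knowledge attribution method scores each intermediate neuron of a transformer’s feed-forward (FFN) layers with integrated gradients: it measures how much gradually scaling a neuron’s activation from zero to its actual value changes the model’s probability of the correct answer to a fill-in-the-blank knowledge probe. The sketch below is an illustrative approximation for a BERT-style masked language model, not the authors’ released code; the model name, layer choice, probe sentence, and step count are assumptions made for the example.

```python
# Sketch of knowledge attribution via integrated gradients over FFN
# intermediate activations, for a BERT-style masked language model.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

prompt = "The capital of France is [MASK]."  # relational-fact probe (illustrative)
inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
answer_id = tokenizer.convert_tokens_to_ids("Paris")

layer = 9  # which layer's FFN to attribute (illustrative choice)
ffn = model.bert.encoder.layer[layer].intermediate  # GELU(W1 x) activations

def masked_prob(scale):
    """Forward pass with the FFN intermediate activation scaled by `scale`;
    returns the probability of the correct answer at the [MASK] position
    together with the scaled activation tensor (for autograd)."""
    cache = {}

    def hook(module, inp, out):
        scaled = out * scale
        cache["act"] = scaled  # keep a handle so we can take gradients
        return scaled          # replace the module's output downstream

    handle = ffn.register_forward_hook(hook)
    logits = model(**inputs).logits
    handle.remove()
    prob = torch.softmax(logits[0, mask_pos], dim=-1)[answer_id]
    return prob, cache["act"]

# Integrated gradients along the scaling path alpha * w, alpha in (0, 1],
# approximated with a Riemann sum.
steps = 20
grad_sum = torch.zeros(model.config.intermediate_size)
for alpha in torch.linspace(1.0 / steps, 1.0, steps):
    prob, act = masked_prob(alpha)
    grad = torch.autograd.grad(prob, act)[0]  # d prob / d activation
    grad_sum += grad[0, mask_pos]

_, full_act = masked_prob(1.0)
attribution = full_act[0, mask_pos].detach() * grad_sum / steps

print("Top candidate knowledge neurons in layer", layer, ":",
      attribution.topk(5).indices.tolist())
```

Neurons whose attribution scores stay high across many paraphrases of the same fact are the ones the paper treats as knowledge neurons.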

The researchers summarize their contributions as follows:

  1. Introduce the concept of knowledge neurons and propose a knowledge attribution method to identify the neurons that express specific factual knowledge.
  2. Conduct both qualitative and quantitative analyses showing that knowledge neurons are highly correlated with knowledge expression in pretrained transformers.
  3. Present a method to explicitly edit (update or erase) factual knowledge in transformers, even without any fine-tuning, as sketched below.
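
Once candidate knowledge neurons are located, the editing step is strikingly direct. The sketch below, continuing from the attribution example above and under the same assumptions, shows a simplified variant of the paper’s value-slot surgery: zeroing a neuron’s value vector (a column of the FFN’s second linear layer) suppresses a fact, while overwriting it with the embedding of a new answer token updates the fact. The layer and neuron indices here are hypothetical placeholders standing in for the attribution output.

```python
# Sketch of fact editing via a knowledge neuron's value vector.
import torch

layer, neuron = 9, 1721  # hypothetical knowledge neuron from attribution
ffn_out = model.bert.encoder.layer[layer].output.dense  # second FFN matrix

with torch.no_grad():
    # Erase the fact: remove the neuron's contribution entirely ...
    ffn_out.weight[:, neuron] = 0.0

    # ... or update it: point the neuron's value vector at a new answer's
    # embedding (e.g. "Paris" -> "London" for the capital-of-France probe).
    new_id = tokenizer.convert_tokens_to_ids("London")
    ffn_out.weight[:, neuron] = model.bert.embeddings.word_embeddings.weight[new_id]
```

Only a single column of one weight matrix changes, which is what makes this kind of edit cheap compared with fine-tuning the whole model.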
