Why does XLNet outperform BERT?

State-of-the-art NLP Model — XLNet

Edward Ma
DataSeries

BERT (Devlin et al., 2018) is a method of pre-training language representations, meaning that we train a general-purpose “language understanding” model on a large text corpus (like Wikipedia), and then use that model for downstream NLP tasks that we care about (like question answering). BERT outperforms previous methods because it is the first unsupervised, deeply bidirectional system for pre-training NLP.
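
To illustrate this pre-train-then-fine-tune workflow, here is a minimal sketch using the Hugging Face transformers library (the library and checkpoint names are illustrative and not part of the original text): a general-purpose pre-trained BERT checkpoint is loaded and reused as the starting point for a downstream task.

```python
# Minimal sketch: reuse a pre-trained BERT checkpoint for a downstream task.
# Assumes the Hugging Face `transformers` library (with PyTorch installed);
# the checkpoint name and task setup are illustrative.
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",  # general-purpose "language understanding" weights
    num_labels=2,         # e.g. a binary sentence-classification downstream task
)

inputs = tokenizer("XLNet builds on ideas from BERT.", return_tensors="pt")
outputs = model(**inputs)     # logits from the (untrained) task head
print(outputs.logits.shape)   # torch.Size([1, 2]); fine-tuning would train this head
```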

XLNet demonstrates state-of-the-art results, exceeding those of BERT. It is a BERT-like model with some modifications. We will go through the following items to understand XLNet:

  • Autoregressive (AR) vs Autoencoding (AE)
  • Permutation Language Modeling
  • Two-Stream Self-Attention
  • Absorbing Transformer-XL
  • Comparison with BERT
  • Experiment Result

Autoregressive (AR) vs Autoencoding (AE)

Autoregressive (AR) language modeling (LM) estimates the probability distribution of a text corpus. For a sequence of inputs, an AR LM factorizes the likelihood as either a forward or a backward product of conditional probabilities. In other words, it encodes text in a uni-directional way. However, this limitation means the model cannot condition on both the left and right context at the same time.
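
For a sequence x = (x1, …, xT), the two factorizations look like this (written out here in standard AR notation for clarity; this is a sketch, not a quote from the paper):

```latex
% Forward (left-to-right) factorization: each token conditions on its left context
p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t \mid \mathbf{x}_{<t})

% Backward (right-to-left) factorization: each token conditions on its right context
p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t \mid \mathbf{x}_{>t})
```

Either way, each conditional sees only one side of the sequence, which is the uni-directional limitation described above.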


Focused on Natural Language Processing and Data Science Platform Architecture. https://makcedward.github.io/