Comparing Pretrained Named Entity Recognition Frameworks (2021 Edition)

Leonard Yeo
Published in Analytics Vidhya · 2 min read · Jan 31, 2021


Whether you are doing machine learning NLP engineering or research, you may come across Named Entity Recognition (NER) topics. Some of you may be working on proofs of concept (PoCs) that involve integrating one of these open-source NLP frameworks into an actual model inference platform.

Some burning questions could be:

  • I don’t have enough annotated/labelled data to train a NER model, but I still need to produce a deliverable. How can I showcase existing models?
  • I don’t have the time to research the existing popular NLP frameworks. I want to know how they perform, especially on NER tasks.

If you have one of the burning questions mentioned above, then welcome to my article. 😊

In this article, I will be talking about 2 basic comparison tests that I would perform.

Test Environment

  • Ubuntu 16.04 Virtual Machine (AMD Ryzen 5 3600 4x processor cores, 8 GB RAM)
  • Docker version 19.03.6, build 369ce74a3c
  • Python 3.7

List of Pretrained NER Frameworks

There are 4 NER frameworks that I am aware of:

  • AllenNLP
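
To give a feel for what "taking an existing model to showcase" looks like, here is a minimal sketch. The commented lines show how a pretrained AllenNLP NER predictor is typically loaded (the model archive URL is an assumption; check the AllenNLP model catalogue for a current path). The helper below then groups the BIO-style tags such models emit into readable entity spans:

```python
# Loading a pretrained AllenNLP NER predictor (commented out so the
# sketch runs without downloading a model archive; URL is an assumption):
#
# from allennlp.predictors.predictor import Predictor
# predictor = Predictor.from_path("<path-or-url-to-ner-model.tar.gz>")
# result = predictor.predict(sentence="Apple opened an office in Singapore.")
# words, tags = result["words"], result["tags"]

def bio_to_spans(words, tags):
    """Group BIO tags (e.g. B-ORG, I-ORG, O) into (entity_text, label) pairs."""
    spans, current, label = [], [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((" ".join(current), label))
            current, label = [word], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(word)
        else:
            if current:
                spans.append((" ".join(current), label))
            current, label = [], None
    if current:
        spans.append((" ".join(current), label))
    return spans

# Hand-written tags standing in for a model's output:
words = ["Apple", "opened", "an", "office", "in", "Singapore", "."]
tags = ["B-ORG", "O", "O", "O", "O", "B-LOC", "O"]
print(bio_to_spans(words, tags))  # [('Apple', 'ORG'), ('Singapore', 'LOC')]
```

The same decoding step works for any framework that emits BIO tags, which makes it a convenient common denominator when comparing their outputs side by side.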
