Reproducing SOTA Commonsense Reasoning Results in fast.ai with OpenAI’s Pretrained Transformer Language Model

Check out our lab member Jack Koch’s blog post on reproducing OpenAI’s Pretrained Transformer Language Model with the fast.ai framework.


Investigating ELMo for Semantic Parsing

For those who haven’t heard yet, NLP’s ImageNet moment has arrived; approaches such as ULMFiT, ELMo, OpenAI GPT, and BERT have gained significant traction in the community over the last year across many language understanding tasks. In his blog, our lab member…


Congratulations to Michi for winning the CRA 2019 Outstanding Undergraduate Researcher Award!

Michihiro Yasunaga is a senior undergraduate student in the LILY Lab working in the fields of automatic text summarization, syntax and semantics, and latent variable models…


LectureBank: A Dataset for NLP Education and Prerequisite Chain Learning

Check out our latest blog post about our AAAI-19 paper:

What Should I Learn First: Introducing LectureBank for NLP Education and Prerequisite Chain Learning


A very successful research year for LILY

So far this year, the LILY (Language, Information, and Learning at Yale) Lab has published eight papers on Natural Language Processing (NLP) at three top-tier conferences: AAAI (The Association for the Advancement of Artificial Intelligence), NAACL (North…