From: Neha Pawar
“Good Morning, Good Afternoon and Good Evening”: the Association for Computational Linguistics (ACL) opened its ceremony with these greetings to address all time zones. ACL 2021, originally planned as a live event in Bangkok, took place online, like all major events during the pandemic. Underline, Gather.Town, and Zoom were the platforms used for the talks and social events.
As in every year’s report, let me start with some statistics about the Natural Language Processing (NLP) research community in 2021. The conference’s acceptance rate remains competitive (~21%), and most submissions continue to come from the US and China.
A new theme track was introduced this year, called NLP for Social Good (NLP4SG). It focuses on discussions about the positive and harmful effects of NLP on people’s lives across different dimensions.
Research on ethics remained a hot topic. Accordingly, authors were encouraged to include an impact statement notifying readers about the effects their research could lead to. This is a great initiative towards responsible research.
Where is the focus in 2021?
In her presidential address, Rada Mihalcea (Rada Mihalcea | MIDAS (umich.edu)) raised the concern that we might be headed towards an AI/NLP monopoly due to our focus on accuracy while ignoring costs. She urged the research community to work on a variety of topics to prevent this from happening (as seen in the slide below).
I was impressed by the many research papers presented during the conference, and here is one I am currently reading (which also received an outstanding paper award): Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering, Karamcheti et al., 2021.
During the conference, I got an expert perspective on topics such as:
1. Meta-learning: The aim here is not just solving a task, but learning to learn. In machine learning, we design an algorithm to solve a task. In meta-learning, we train a model that learns to design an algorithm for solving similar tasks. Bear with me: this is as confusing as the movie Inception and draws many parallels to it. Let me clarify with the slide below.
Here, the meta-learning model learns to learn binary image classification. It finds a function F for binary classification that can learn a function f for, say, a dog-cat classification task. Suppose it learns from a dog-cat dataset and a car-bike dataset. The model can later be applied to apple-orange classification with very few examples, even though it has never seen apples or oranges during training. I will be following up on this research in the next few months.
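To make the idea concrete, here is a minimal sketch of the few-shot setting described above. As a stand-in for the learned function f, it uses a simple nearest-centroid classifier: given a handful of labelled support examples from a brand-new class pair (the apple-orange case), it builds one prototype per class and labels queries by the closest prototype. The feature vectors and class names are made-up placeholders, and this is only an illustration of the adaptation step, not the meta-training procedure itself.

```python
import numpy as np

def fit_prototypes(support_x, support_y):
    """Inner learner f: compute one mean vector (prototype) per class."""
    classes = sorted(set(support_y))
    return {c: np.mean([x for x, y in zip(support_x, support_y) if y == c], axis=0)
            for c in classes}

def predict(prototypes, query_x):
    """Label each query example by its nearest class prototype."""
    return [min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))
            for x in query_x]

# A new "apple vs. orange" task seen only at test time:
# two labelled support examples per class are enough to adapt.
support_x = [np.array([1.0, 0.1]), np.array([0.9, 0.2]),   # apples
             np.array([0.1, 1.0]), np.array([0.2, 0.9])]   # oranges
support_y = ["apple", "apple", "orange", "orange"]

protos = fit_prototypes(support_x, support_y)
print(predict(protos, [np.array([0.95, 0.15]), np.array([0.15, 0.95])]))
# -> ['apple', 'orange']
```

In full meta-learning, the feature representation fed into such an inner learner would itself be trained across many tasks (dog-cat, car-bike, ...) so that adaptation to a new pair works from just a few examples.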
2. Dynaboard: This is yet another step towards a holistic metric for our models. We all know the problems with the metrics we currently use: if we report results as a single score like accuracy, we fail to communicate the model’s fairness or efficiency. With Dynaboard, a model is evaluated with an overall ranking function that depends on multiple factors such as accuracy, compute, memory, robustness, and fairness. These are combined into a Dynascore, a multi-dimensional score. You can read more about it here: Dynaboard: Moving beyond accuracy to holistic model evaluation in NLP (facebook.com)
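To illustrate the idea of ranking models along several axes at once, here is a toy sketch. Note the hedging: the real Dynascore uses a utility-based weighting scheme described in the Dynaboard work; the plain weighted average below, and all metric values and weights, are made-up placeholders chosen only to show why a single accuracy number is not enough.

```python
# Toy illustration of combining several evaluation axes into one ranking
# score, in the spirit of Dynaboard's Dynascore. The actual Dynascore is
# computed differently; this weighted average is a simplified stand-in.

def combined_score(metrics, weights):
    """Weighted average of per-axis scores, each normalised to [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total

# Hypothetical per-axis scores for one model (placeholder numbers).
model_a = {"accuracy": 0.91, "robustness": 0.70, "fairness": 0.85,
           "throughput": 0.60, "memory": 0.50}

# Hypothetical weights for a user who cares most about accuracy.
weights = {"accuracy": 4, "robustness": 1, "fairness": 1,
           "throughput": 1, "memory": 1}

print(round(combined_score(model_a, weights), 3))
# -> 0.786
```

Changing the weights re-ranks models: a deployment-focused user could upweight throughput and memory, and a model with mediocre accuracy but small footprint might come out ahead.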
Looking forward to next year’s conference.