EMNLP Recap

The Wit.ai Team · Wit.ai · Sep 23, 2015

The 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP) is the high mass of the NLP community. It took place last weekend in Lisbon, Portugal. Here are a few thoughts I’d like to share.

First, NLP is HOT. I could see the Sponsors/Hunters/Gatherers of Talent from many well-known Silicon Valley companies… hiding in the dark alleys, waiting for their PhD prey. I even had to hide my Facebook badge because I think some researchers took me for a recruiter and ran away from me!

Application-wise, apart from some people who seem obsessed with “Information Retrieval” (read: automatic surveillance of social media), it was the usual suspects: machine translation, summarization, question answering, search, and a bunch of classical academic tasks like parsing, semantic role labelling, etc. Nothing really new.

So what’s new? The deep learning tsunami continues to take over NLP.

I’m proud that I saw the names of my FAIR friends and colleagues Yann LeCun, Ronan Collobert, Antoine Bordes, Jason Weston and many others cited on presentation slides over and over. The inside joke in Lisbon was that the E in EMNLP now stands for Embedding (instead of Empirical). Yes, I know: when a full room in a restaurant laughs at something like that, you know you are in a special place. After all, 99% of modern NLP is empirical anyway. The opening keynote was delivered by Yoshua Bengio. Many papers were variations on “the state of the art for X was Y; we replaced the hand-crafted, manually hacked, heavily engineered Z with an RNN, and it improved the state of the art by 5 points.” The poor folks who presented deep-learning-free papers invariably got the question: “did you also try with a [insert deep net technique here]?”

We are now in a new phase where, beyond just using deep learning to improve some components (like the acoustic model in speech recognition), researchers are starting to ship complete end-to-end systems certified 100% deep learning (with 0 added rule-based engine!), an approach pioneered by the now-classic “NLP (almost) from scratch.”
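
To make the idea concrete, here is a tiny, purely illustrative sketch (mine, not any system presented at the conference) of what “end-to-end” means: raw tokens go in, a label comes out, and every intermediate representation is a learned vector rather than a hand-built feature. All names and sizes here are made up, and the weights are random and untrained.

```python
# A minimal sketch of an end-to-end RNN text classifier: no hand-crafted
# features anywhere, just learned embeddings and learned transition weights.
# Everything below is illustrative; the weights are random, not trained.
import numpy as np

rng = np.random.default_rng(0)

vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "awful": 4}
embed_dim, hidden_dim, n_labels = 8, 16, 2

# A learned lookup table replaces hand-built features (bags of words, etc.).
embeddings = rng.normal(scale=0.1, size=(len(vocab), embed_dim))

# Simple (Elman-style) RNN parameters.
W_xh = rng.normal(scale=0.1, size=(embed_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_hy = rng.normal(scale=0.1, size=(hidden_dim, n_labels))

def classify(tokens):
    """Run the RNN over a sentence and return raw label scores."""
    h = np.zeros(hidden_dim)
    for tok in tokens:
        x = embeddings[vocab[tok]]        # token -> learned vector
        h = np.tanh(x @ W_xh + h @ W_hh)  # update the hidden state
    return h @ W_hy                       # sentence -> label scores

print(classify("the movie was great".split()))
```

In a real system, all of these weights would be trained jointly by backpropagation, which is precisely what makes the pipeline end-to-end.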

I was a bit disappointed that nobody spoke about “grounded NLP”. In fact, deep learning brings so much improvement potential that researchers may be tempted to keep improving existing systems instead of venturing into uncharted territories. I can’t wait to see them try to teach machines the actual, experienced, “felt” meaning of language. I had a Super Bock beer to forget.

Finally, I had interesting conversations with lots of researchers about Facebook M, an ambitious project the Wit team is involved with at Facebook. We’ll need to improve and expand many aspects of Wit in order to develop M, and I’m glad the community will also benefit from that. That’s the beginning of an interesting journey.

As always, feel free to reach out if you have any questions, comments, or suggestions.

Keep hacking,

Alex Lebrun & Team Wit
