Grokking Deep Learning, Something-Something V2, Tensorflow 1.9, NLP ImageNet Moment, Feature-Wise Transformations,…

elvis · Published in DAIR.AI · Jul 13, 2018

Welcome to the 20th issue of the NLP Newsletter! Here is this week’s notable NLP and AI news. Today we have book releases, AI safety and ethics, tons of large-scale datasets, an overview of imitation learning, research tutorials, calls for research, and much more.

On People and Society…

Andrew Trask releases his new book 📘 entitled “Grokking Deep Learning,” in which he aims to teach deep learning and the related mathematical concepts in a more intuitive way using NumPy (notebooks included) — Link
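In that from-scratch spirit, here is a minimal toy sketch (my own, not taken from the book) of the kind of intuition the book builds: a single weight fitted with plain gradient descent, no framework required.

```python
# Toy example: fit y = 2*x with a single weight via plain gradient descent.
# This is an illustrative sketch, not code from "Grokking Deep Learning".
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]  # targets
w = 0.0                     # initial weight
lr = 0.01                   # learning rate

for _ in range(200):
    # gradient of the (halved) mean squared error with respect to w
    grad = sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad          # gradient step

print(round(w, 3))  # converges close to 2.0
```

The same loop written with NumPy arrays in place of Python lists is essentially the style the book’s notebooks use throughout.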

Elvis Saravia discusses the challenges of AI communication by reviewing the current technologies and strategies used to communicate science-related topics — Link

Zachary Lipton discusses four major problems trending in machine learning research (a must read) — Link

Check out Sebastian Ruder’s new article on why he thinks NLP’s ImageNet moment has arrived — Link

NLP is already being used for clinical documentation, but what are the safety concerns for patients? — Link

Moustapha Cisse, founder of Google AI Ghana, introduces a new one-year intensive Master’s program in Machine Intelligence in Africa — Link

Jack Clark, communications director at OpenAI, argues that Facebook’s DensePose could be used for “real-time surveillance,” which brings forth troubling implications — Link

Timely and interesting TED talk by Dirk Hovy on why computers are still struggling to understand us (video) — Link

On Education and Research…

Check out this impressive list of tutorials on some of the most outstanding ML research of the past few years, such as DeepStack, InfoGAN, and AlphaGo Zero (highly recommended) — Link

Zaid Alyafeai teaches how to build Keras models and migrate them to the browser in this very easy-to-read article — Link

Yisong Yue gives a broad overview of “Imitation Learning” techniques and applications (presented at ICML 2018) — Link

Learn how to cite datasets used to conduct linguistics research. The standard could also serve NLP and AI research, where there are still no clear guidelines on how to go about doing this — Link

Access all ICML 2018 tutorials here (videos) — Link

Conference season is still around! Learn how to make the most out of your conferences in this well-elaborated guide by Jenn Vaughan — Link

New research proposes variational attention networks for retaining performance gains on machine translation and visual question answering tasks while boosting computational speed — Link

On Code and Data…

The results are out for the “Implicit Emotion Recognition Task” — Link

Want a dataset you can use to teach machines common sense? TwentyBN has just released a massive video dataset (Something-Something V2) to help systems develop video understanding and visual common sense — Link

Facebook has released a massive dataset of URL shares for conducting misinformation and news analysis (Call for Proposals now open) — Link

A PyTorch implementation showing how to learn distributed sentence representations — Link

FAIR releases a dataset that can be useful for training AI agents to teach each other visual navigation — Link

On Industry…

Facebook is calling for proposals to help build systems that can understand and detect misinformation on WhatsApp — Link

Learn how NLP can be used to leverage and unlock unstructured healthcare datasets — Link

Google AI opens new object detection competition which includes a massive training dataset — Link

Learn how Passage AI is using NLP and deep learning, specifically bi-LSTMs, to train state-of-the-art conversational bots that can converse in major languages such as English, Spanish, and Chinese — Link

TensorFlow 1.9 has just been released, and it includes an excellent starting guide on how to use tf.keras and eager execution — Link

Worthy Mentions…

How to conduct socially responsible NLP research — Link

Bi-weekly NLP Newsletter by Sebastian Ruder — Link

Distill releases new research on “Feature-wise transformations,” a technique for context-based processing, i.e., processing one source of information in the context of another — Link

DeepMind’s new paper discusses how to measure abstract reasoning in neural networks — Link

Check out these slides if you want to learn more about emojis and how NLP can be used to understand them in the context of social media text — Link

Calls for Research…

Ever wondered why Donald Trump and others use uppercase letters so much on Twitter, as reported here? I believe this could make for an interesting NLP study, where we analyze several Twitter accounts and look for linguistic patterns that give us more clues to understand and explain this phenomenon. Steven Pinker thinks capitalization is used to express irony, but I believe it also has to do with expressing emotional intensity and persuasion.
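As a possible starting point for such a study, the sketch below (my own, using made-up example sentences rather than real tweets) computes a simple per-text uppercase-word ratio, one candidate linguistic signal to compare across accounts.

```python
import re


def uppercase_ratio(text):
    """Fraction of words (length > 1) written fully in uppercase."""
    words = [w for w in re.findall(r"[A-Za-z']+", text) if len(w) > 1]
    if not words:
        return 0.0
    shouted = [w for w in words if w.isupper()]
    return len(shouted) / len(words)


# Hypothetical example texts, not real tweet data.
samples = [
    "WITCH HUNT! The FAKE NEWS media is at it again",
    "Great meeting today with our partners on trade",
]
for s in samples:
    print(round(uppercase_ratio(s), 2))
```

A real study would of course pair a measure like this with sentiment or emotion annotations to test the intensity-and-persuasion hypothesis against the irony one.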

We have come a long way with the NLP Newsletter (we are on the 20th issue already 🙌). It’s worth a little celebration with emojis 🎉🎉🎉. These newsletters are certainly taking more time to prepare as the demand for, and supply of, high-quality information and news increases. I am very grateful for the support and I hope to keep informing you of the latest and best NLP and AI news out there. You can also reach me on Twitter if you want to further discuss any of the items above.

If you spot any errors or inaccuracies in this newsletter, please comment below.
