Google AI Dopamine, GLUE, TransmogrifAI, Machine Learning for Health Care, NLP Interpretability, Probabilistic Thinking,…

elvis · Published in DAIR.AI · 4 min read · Sep 3, 2018

Great day, awesome people, and welcome to the 28th issue of the NLP Newsletter! I am Elvis from Belize, Editor of DAIR.ai and a PhD researcher in AI and NLP. Here is this week’s notable NLP news: understanding human intelligence and using it for AI progress; a machine learning for healthcare recap; reinforcement learning reproducibility; state-of-the-art machine translation; automated machine learning; predicting earthquake aftershock locations; and much more.

🔝 — my top recommendations

🌟 — my favorites

On People…

Read more on why schools are using AI to track students’ writing patterns based on what they type into their computers — link

Irene Chen gives a recap of the important topics discussed at the Machine Learning for Health Care (MLHC) conference — from privacy to model robustness to understanding clinical notes — link 🔝

Nature publishes a paper describing a deep learning approach for predicting earthquake aftershock locations. The model is also useful for understanding the underlying physics behind the phenomenon — link

Yoshua Bengio discusses the implications of disentangled representations for higher-level cognition. He also discusses how “natural language could be used as an additional hint about the abstract representations and disentangled factors which humans have discovered to explain their world”. — link 🔝

PyTorch is hosting its first developer conference, where the team aims to discuss research and production capabilities of the new release, PyTorch 1.0 — link

DeepMind, in collaboration with Harvard professor Wouter Kool, releases a new paper investigating how human decision-makers deploy mental effort and how these insights can open up opportunities for progress in artificial intelligence research — link 🌟

On Education and Research…

Google AI releases Dopamine, a TensorFlow-based framework that provides flexibility, stability, and reproducibility for new and experienced reinforcement learning researchers — link

In a new episode of the NLP Highlight show, researchers discuss the importance of establishing a benchmark framework, known as GLUE, for natural language understanding — link 🌟

The authors of a new paper claim that information obtained from paraphrases can be used to improve multilingual machine translation — link

A new paper discusses the capability of text classifiers to recover demographic information from textual data with reasonable accuracy — link

A recent work compares the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradation. The authors mainly test for generalization capabilities and how weaknesses observed in DNNs can be systematically addressed using a lifelong machine learning approach — link

Find out how text generation can be done with an alternative method based on a hidden semi-Markov model (HSMM) decoder, which achieves performance similar to standard encoder-decoder models. The proposed model allows for more interpretability and control — something the authors claim is important in text generation tasks — link

Facebook’s research team releases a field guide to applying machine learning. The video series provides best practices and practical approaches for applying machine learning capabilities to real-world problems — link

On Code and Data…

DeepMind researcher, Shakir Mohamed, releases an impressive set of slides where he introduces foundations, tricks, and algorithms needed for probabilistic thinking — link 🔝
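
As a taste of the material in those slides (this identity is a staple of introductions to probabilistic machine learning, though not necessarily presented in this exact form there), the evidence lower bound (ELBO) at the heart of variational inference can be written as:

$$\log p(x) \;\ge\; \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big] \;-\; \mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)$$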

A comprehensive list of tutorials on how to build machine learning algorithms from scratch — link
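
To give a flavor of the “from scratch” style these tutorials follow, here is a minimal sketch of my own (not taken from the linked list) of linear regression trained with batch gradient descent in plain NumPy:

```python
import numpy as np

# Toy data generated from y = 3x + 2 with a little noise
np.random.seed(0)
X = np.random.uniform(-1, 1, size=100)
y = 3 * X + 2 + 0.1 * np.random.normal(size=100)

# Model parameters for y_hat = w * x + b, learned with gradient descent
w, b = 0.0, 0.0
lr = 0.1  # learning rate

for step in range(500):
    y_hat = w * X + b
    error = y_hat - y
    # Gradients of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * X)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # should land near w ≈ 3, b ≈ 2
```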

Bloomberg researcher, Yi Yang, releases code and paper for his new work on modeling convolutional filters with RNNs, which he claims naturally capture long-term dependencies and compositionality in language — link to paper | link to code

PyImageSearch just published a new tutorial on how to perform semantic segmentation using OpenCV and deep learning. The method works for both images and videos — link
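
The tutorial is built around OpenCV’s dnn module; the rough pattern looks like the sketch below. Note that the model and image file names here are placeholders, and the exact model and preprocessing used in the tutorial may differ.

```python
import cv2
import numpy as np

# Placeholder paths: the tutorial uses a pretrained ENet segmentation model;
# substitute whatever serialized model and input image you have on hand.
MODEL_PATH = "enet-model.net"  # hypothetical file name
IMAGE_PATH = "example.jpg"     # hypothetical file name

# Load the serialized network with OpenCV's dnn module
net = cv2.dnn.readNet(MODEL_PATH)

# Preprocess the image into a blob and run a forward pass
image = cv2.imread(IMAGE_PATH)
blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0,
                             size=(1024, 512), swapRB=True, crop=False)
net.setInput(blob)
output = net.forward()  # shape: (1, num_classes, H, W)

# Per-pixel argmax over the class scores gives the segmentation map
class_map = np.argmax(output[0], axis=0).astype(np.uint8)
print("segmentation map shape:", class_map.shape)
```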

On Industry…

Salesforce’s Einstein AI team releases TransmogrifAI, an AutoML library for structured data that focuses on accelerating machine learning developer productivity through automation — link

Facebook researchers come up with a state-of-the-art method for machine translation that relies only on monolingual corpora, which can be useful for low-resource languages — link | paper

Here is a nice list, provided by Google’s ML team, of machine learning rules and best practices for deploying real-world ML-based apps — link 🔝


Worthy Mentions…

dair.ai releases a new post on the state of deep-learning-based natural language processing techniques — link 🔝

MIT Technology Review releases a new article explaining all the important details of the sensational new machine learning method used to transfer one person’s motion to another (i.e., Everybody Dance Now) — link

Check out a collection of inspirational AI-powered JavaScript apps on this cool website. Submissions use tools such as TensorFlow.js, Magenta.js, p5.js, and others — link

Skynet this week #7: OpenAI’s big loss, DeepFake dancing, AI drawing, and more! — link

The NLP Newsletter (Issue 27): Deep INFOMAX, Image to Image Translation, FEVER, Perception Engines, QuAC, Best 150 ML Tutorials — link

Sebastian Ruder’s NLP Newsletter (Issue #31) — link

Alignment Newsletter #22 — Research agenda for AI governance — link

If you spot any errors or inaccuracies in this newsletter, please comment below. I would also appreciate any suggestions for improving the newsletter in the comments. Otherwise, just help me by sharing the NLP Newsletter. If you have any further questions, DM me at @omarsar0!
