The Best of AI: New Articles Published This Month (November 2018)

10 data articles handpicked by the Sicara team, just for you

Félix
Sicara's blog
7 min read · Dec 10, 2018

Welcome to the November edition of our best and favorite articles in AI that were published this month. We are a Paris-based company that does Agile data development. This month, we spotted articles about fake news, natural language processing, imagination and more.

1 — Prediction of 3D Shapes of Proteins

The 3D shape of a protein is extremely important: if a protein is misfolded, it becomes inactive and can be at the origin of a disease. Unfortunately, most of the time we only have access to the DNA that encodes the protein, and scientists have to use very expensive methods such as cryo-electron microscopy or nuclear magnetic resonance to determine the real 3D shape.

In this article, DeepMind introduces an AI model that can predict the 3D shape of a protein from its genetic sequence alone. They use a neural network that predicts the distance between each pair of amino acids (the basic building blocks of proteins) and the angles of the chemical bonds that link amino acids. From these predictions they can then reconstruct the 3D shape of the protein.
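DeepMind’s actual pipeline is far more sophisticated, but the final reconstruction step can be illustrated with a toy sketch: given a matrix of predicted pairwise distances, recover 3D coordinates by minimizing the mismatch between current and target distances (all names below are our own, not DeepMind’s code):

```python
# Toy distance-geometry reconstruction: find 3D points whose pairwise
# distances match a predicted distance matrix.
import numpy as np
from scipy.optimize import minimize

def reconstruct_3d(target_dist, n_residues, seed=0):
    """target_dist: (n_residues, n_residues) symmetric matrix of
    predicted distances, with zeros on the diagonal."""
    rng = np.random.default_rng(seed)
    x0 = rng.standard_normal(n_residues * 3)  # random initial coordinates

    def loss(flat):
        coords = flat.reshape(n_residues, 3)
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1) + 1e-9)
        return ((dist - target_dist) ** 2).sum()

    result = minimize(loss, x0, method="L-BFGS-B")
    return result.x.reshape(n_residues, 3)  # coordinates, up to rotation
```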

It is a world first, and it has tremendous potential for molecular medicine.

Read AlphaFold: Using AI for scientific discovery — from DeepMind

2 — Fei-Fei Li: AI and Ethics

WIRED looks back on Fei-Fei Li’s career, from her arrival in the US as a young immigrant to her return to Stanford. The creator of ImageNet fights for ethics in AI and works to bring underrepresented youth into AI labs.

In June, speaking before the US House Committee on Science, Space, and Technology, she said:

“There’s nothing artificial about AI. It’s inspired by people, it’s created by people, and — most importantly — it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.”

A perfect summary of her state of mind!

Read Fei-Fei Li’s quest to make AI better for humanity — from WIRED

3 — BERT, the New Cornerstone of NLP

Google has finally open-sourced Bidirectional Encoder Representations from Transformers (BERT)!

This pre-trained network is a new milestone in Natural Language Processing (NLP). As the article explains, it represents each word taking into account the other words of the sentence, contrary to context-free models such as word2vec. For example, BERT generates a different word embedding for “bank” in these two sentences: “bank account” and “bank of the river”.

And the icing on the cake is that pre-trained BERT can be reused for nearly all your NLP models.
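Here is a minimal sketch of the “bank” example using the Hugging Face transformers package (a third-party library, our choice rather than Google’s own release):

```python
# Compare BERT's contextual embeddings of "bank" in two sentences.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence, word):
    """Return BERT's embedding of `word` in the context of `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    idx = inputs["input_ids"][0].tolist().index(
        tokenizer.convert_tokens_to_ids(word))
    return hidden[idx]

a = embed_word("I opened a bank account", "bank")
b = embed_word("I sat on the bank of the river", "bank")
print(torch.cosine_similarity(a, b, dim=0))  # noticeably below 1.0
```

A context-free model like word2vec would return exactly the same vector for both occurrences.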

Read Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing — from Google AI blog

Here is the open source code release.

4 — Montezuma’s Revenge Solved

A team from Uber has finally cracked Montezuma’s Revenge, an infamously hard Atari game. It is a sparse-reward problem: rewards are infrequent and difficult to obtain.

To do so, they designed a new algorithm called Go-Explore. It keeps a memory of the states it has visited and can thus return to promising areas to explore them further. To return to such areas, the environment is replayed deterministically. Afterward, the algorithm makes the solution robust to random events with Imitation Learning.
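As a rough illustration, the exploration phase can be sketched as follows. The `env` interface (deterministic `save_state`/`restore_state`, a coarse `cell` abstraction of the game state) is hypothetical shorthand for Uber’s emulator-snapshot setup, and the real algorithm weights cell selection rather than picking uniformly:

```python
import random

# archive: cell -> (saved emulator state, actions that reached it, score)
archive = {}

def explore_phase(env, n_iterations, k_steps=100):
    cell = env.reset()  # coarse "cell" describing the start state
    archive[cell] = (env.save_state(), [], 0.0)
    for _ in range(n_iterations):
        # 1. Pick a promising cell and return to it deterministically.
        cell = random.choice(list(archive))
        saved_state, actions, score = archive[cell]
        env.restore_state(saved_state)
        # 2. Explore from there, e.g. with random actions.
        for _ in range(k_steps):
            action = env.sample_action()
            new_cell, reward, done = env.step(action)
            actions = actions + [action]
            score += reward
            # 3. Keep any new cell, or a better path to a known one.
            if new_cell not in archive or score > archive[new_cell][2]:
                archive[new_cell] = (env.save_state(), actions, score)
            if done:
                break
```

The trajectories found this way are then hardened against randomness in the separate Imitation Learning phase mentioned above.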

Go-Explore also performed very well on Pitfall, which is harder than Montezuma’s Revenge because many actions lead to small negative rewards.

Read Montezuma’s Revenge Solved by Go-Explore, a New Algorithm for Hard-Exploration Problems (Sets Records on Pitfall, Too) — from Uber Engineering

5 — Quantum Neural Networks

The elementary component of Deep Learning is the perceptron: a single artificial neuron. It takes an input vector, computes its dot product with a vector of weights, and returns a scalar.
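In code, a perceptron fits in a few lines of NumPy, here with the classic step activation:

```python
import numpy as np

def perceptron(x, w, b=0.0):
    """Weighted sum of the inputs, thresholded to a binary output."""
    return 1 if np.dot(x, w) + b > 0 else 0

print(perceptron(np.array([0.5, -1.2, 3.0]), np.array([0.2, 0.4, 0.1])))
```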

An Italian team has implemented one on a quantum computer. The main advantage of quantum computing here is capacity: n qubits hold 2^n amplitudes, so the number of input features a quantum perceptron can process grows exponentially with the number of qubits.
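A classical simulation makes the counting argument concrete: a 2^n-dimensional input fits into an n-qubit state, and the perceptron’s output can be read off the overlap between the input state and the weight state (a simplified rendering of the scheme, not the authors’ code):

```python
import numpy as np

n_qubits = 3
dim = 2 ** n_qubits            # 8 features stored in only 3 qubits

x = np.random.randn(dim)
w = np.random.randn(dim)
x /= np.linalg.norm(x)         # amplitude encoding needs unit norm
w /= np.linalg.norm(w)

output = np.abs(np.dot(w, x)) ** 2  # |<w|x>|^2, measured on real hardware
print(output)
```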

It suggests that extremely fast quantum neural networks could become feasible in the near future.

Read Machine learning, meet quantum computing — from MIT Technology Review

6 — AI & Fake News

Did you know that Obama said “Killmonger was right!”?

Check out this YouTube video.

Even though this video looks true to life, it is not real: it was generated by a deep neural network.

This article shows that it will become easier and easier to create realistic fake news, largely because of progress in AI, and analyzes how to respond to this dramatic issue.

Read How Will We Outsmart A.I. Liars? — from The New York Times

7 — Zero-shot Learning with GANs

Researchers from Facebook AI have proposed a new Zero-Shot Learning (ZSL) model combined with a Generative Adversarial Network (GAN) that can analyze text articles and then find the pictures corresponding to the objects described in the text (birds, in this case).

This model, called generative adversarial zero-shot learning (GAZSL), is a ZSL model because it can associate a text with a picture it has never seen before. The goal of the model is to extract visual features from the text and create a synthetic visual representation of it.
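Stripped to its core, the idea looks roughly like this in PyTorch (a toy paraphrase with made-up dimensions, not Facebook’s released code):

```python
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, VISUAL_DIM = 300, 100, 2048

# Generator: text embedding + noise -> synthetic visual feature.
generator = nn.Sequential(
    nn.Linear(TEXT_DIM + NOISE_DIM, 1024),
    nn.LeakyReLU(0.2),
    nn.Linear(1024, VISUAL_DIM),
)

def synthesize_features(text_emb, n_samples=64):
    noise = torch.randn(n_samples, NOISE_DIM)
    text = text_emb.expand(n_samples, -1)
    return generator(torch.cat([text, noise], dim=1))

def classify(image_feature, class_text_embs):
    """Assign an image to the unseen class whose synthetic features
    (generated from its text description) are closest on average."""
    dists = [torch.cdist(image_feature[None], synthesize_features(t)).mean()
             for t in class_text_embs]
    return int(torch.tensor(dists).argmin())
```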

GAZSL gives state-of-the-art results and is open-sourced. You should definitely give it a try!

Read Zero-shot learning: Using text to more accurately identify images — from Facebook AI

8 — AI Can Imagine New Virtual Worlds

Nvidia has developed new AI software that can generate an entire 3D environment. They succeeded in recreating entire cities from scratch.

To do so, videos of real streets are first segmented by a neural network. Then they train the AI (a Generative Adversarial Network) to recreate the environment from the segmentation maps (which contain only high-level semantics). Once trained, the AI can generate entirely new virtual cities.
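A toy, runnable version of one training step conveys the structure (our own drastic simplification; Nvidia’s model is far larger and also conditions on previous frames):

```python
import torch
import torch.nn as nn

N_CLASSES, H, W = 20, 64, 64  # semantic classes and frame size (made up)

# Generator: one-hot semantic label map -> RGB frame.
generator = nn.Sequential(
    nn.Conv2d(N_CLASSES, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
# Discriminator: frame -> real/fake logit.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * (H // 2) * (W // 2), 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# One step on a dummy batch standing in for segmented street footage.
labels = torch.randint(0, N_CLASSES, (8, H, W))
maps = nn.functional.one_hot(labels, N_CLASSES).permute(0, 3, 1, 2).float()
real = torch.rand(8, 3, H, W) * 2 - 1

fake = generator(maps)
d_loss = (bce(discriminator(real), torch.ones(8, 1))
          + bce(discriminator(fake.detach()), torch.zeros(8, 1)))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

g_loss = bce(discriminator(fake), torch.ones(8, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

At inference time, the label maps can come from a game engine instead of real footage, which is how new cities get “dreamed up”.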

Read AI software can dream up an entire digital world from a simple sketch — from MIT Technology Review

9 — AI Predicting AI

A team from Microsoft Research has invented a new neural architecture search (NAS) algorithm called neural architecture optimization (NAO).

This algorithm optimizes the structure of your neural network to make it more efficient. It is based on an autoencoder that maps a discrete neural network architecture to a continuous embedding. A predictor takes this embedding as input and tries to predict the performance of the corresponding network. By doing gradient ascent in this latent space, an embedding with a better predicted score can be found. A decoder then converts the improved embedding back into a new, better neural network structure.
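Schematically, the inner loop looks like this (our own paraphrase of the paper’s idea, where `encoder`, `predictor`, and `decoder` stand for the trained networks):

```python
import torch

def improve_architecture(arch_tokens, encoder, predictor, decoder,
                         step_size=0.1, n_steps=10):
    # 1. Map the discrete architecture to a continuous embedding.
    z = encoder(arch_tokens).clone().requires_grad_(True)
    # 2. Gradient ascent on predicted performance in the latent space.
    for _ in range(n_steps):
        score = predictor(z)
        grad, = torch.autograd.grad(score.sum(), z)
        z = (z + step_size * grad).detach().requires_grad_(True)
    # 3. Decode the improved embedding back to a discrete architecture.
    return decoder(z)
```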

In short: a machine learning algorithm predicting better machine learning algorithms!

Read Discovering the best neural architectures in the continuous space — from Microsoft Research Blog

10 — AI & New Situations

Overfitting in Reinforcement Learning (RL) is not well studied. That is why OpenAI has developed a new environment to test it: CoinRun.

In this environment, your AI trains on some situations (worlds where it must collect coins) and is evaluated on situations it has never seen (for example, worlds with moving obstacles and lava). This makes it possible to quantify its ability to generalize.
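The evaluation protocol itself is simple. Here is a stubbed sketch (our own stand-ins, not OpenAI’s code) of how the generalization gap is measured:

```python
import random

def make_level(seed):
    return random.Random(seed)      # stand-in for a procedurally built level

def run_episode(policy, level):
    return policy(level.random())   # stand-in for one rollout's score

train_levels = [make_level(s) for s in range(500)]       # seen in training
test_levels = [make_level(s) for s in range(500, 1000)]  # never seen

policy = lambda obs: obs            # stand-in for the trained agent
train_score = sum(run_episode(policy, l) for l in train_levels) / 500
test_score = sum(run_episode(policy, l) for l in test_levels) / 500
print("generalization gap:", train_score - test_score)
```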

Here is the open-source environment if you want to give it a try.

Read Quantifying Generalization in Reinforcement Learning — from OpenAI

We hope you’ve enjoyed our list of the best new articles in AI this month. Feel free to suggest additional articles or give us feedback in the comments; we’d love to hear from you! See you next month.

Read the October edition
Read the September edition
Read the August edition
Read the July edition
Read the June edition

Read the original article on Sicara’s blog here.
