The Best of AI: New Articles Published This Month (January 2019)

10 data articles handpicked by the Sicara team, just for you

Antoine Moreau
Sicara's blog
8 min read · Feb 7, 2019


Read the original article on Sicara’s blog here.

Welcome to the January edition of our best and favorite articles in AI that were published this month. We are a Paris-based company that does Agile data development. This month, we spotted articles about reinforcement learning, natural language processing, artificial intelligence legislation and more. We advise you to have a Python environment ready if you want to follow some tutorials :). Let’s kick off with the comic of the month:

“It’s hard to train deep learning algorithms when most of the positive feedback they get is sarcastic.”

1 — How does an AI think when painting?

The first portrait painted by an AI and sold at an art auction

Generative Adversarial Networks (GANs) are algorithms able to produce realistic outputs. For example, they were used to generate faces or produce fake videos of celebrities. They also created the first AI painting ever sold at an art auction.

Researchers from the MIT-IBM Watson AI Lab realized that painting GANs could give humans insight into how neural networks learn and think. Indeed, they discovered that clusters of neurons had learned to represent specific elements (trees, walls, doors…). Those algorithms had learned, by themselves, to organize pixels into sensible groups.

The team released an app, GANpaint, that lets you observe this phenomenon. By activating specific clusters of neurons in the network, you can draw doors, trees or clouds on pictures. The demo is astonishing!

And if you try to draw a door in the sky, nothing happens: The GAN also learned that it makes no sense to draw trees, windows or doors in the middle of the sky!

Give it a try, it’s worth it.

Read A neural network can learn to organize the world it sees into concepts — just like we do — from MIT Technology Review

2 — TensorFlow 2.0: Learning by Doing

The TensorFlow team recently announced the release of TensorFlow 2.0. Introduced as a milestone, this new release is supposed to focus on simplicity and ease of use.

To form your own opinion of the improvements, you can read this article, which showcases the new features through an implementation of deep reinforcement learning (DRL). I especially liked it because it helps you understand the main changes in TensorFlow 2.0.

TensorFlow 2.0 is still at an experimental stage, but you can already give it a try and answer the question: will TensorFlow 2.0 make your life easier?
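Independently of TensorFlow itself, here is a minimal, standard-library-only sketch of the policy-gradient idea that DRL tutorials like this one build on: a REINFORCE-style update on a two-armed bandit. Everything here (the payoffs, the learning rate) is illustrative and not taken from the article.

```python
import math
import random

random.seed(0)

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

# Two-armed bandit: arm 1 pays off more often than arm 0.
TRUE_PAYOFF = [0.2, 0.8]

prefs = [0.0, 0.0]   # policy parameters (action preferences)
alpha = 0.1          # learning rate

for _ in range(2000):
    probs = softmax(prefs)
    action = random.choices([0, 1], weights=probs)[0]
    reward = 1.0 if random.random() < TRUE_PAYOFF[action] else 0.0
    # REINFORCE-style update: the gradient of log softmax
    # with respect to preference a is (indicator - prob).
    for a in range(2):
        grad = (1.0 if a == action else 0.0) - probs[a]
        prefs[a] += alpha * reward * grad

# After training, the policy should strongly prefer the better arm.
print(round(softmax(prefs)[1], 2))
```

A deep RL agent replaces the two preference numbers with a neural network, which is exactly where TensorFlow 2.0's eager, Keras-first style comes in.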

Read Deep Reinforcement Learning with TensorFlow 2.0 — from Roman Ring

3 — Predicting stock price movements with deep learning

Since Generative Adversarial Networks are good at generating lifelike data, have you ever thought of using them to generate future stock price movements? That is exactly what the author of this article tried to do!

It is rather unusual to use GANs to predict future stock prices. Moreover, the author tried to leverage other state-of-the-art deep learning algorithms to improve the performance of his model. You will read about Bidirectional Encoder Representations from Transformers (BERT) and sentiment analysis, about reinforcement learning, about convolutions… What a program!

And get ready, there are some Python code snippets!
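Whatever model you choose, the raw price series first has to be turned into supervised (features, target) pairs. A minimal sketch of that windowing step, using only the standard library (the window length and the toy series are mine, not the article's):

```python
def make_windows(prices, window=3):
    """Turn a price series into (past window, next move) training pairs.

    The target is +1 if the next price goes up, -1 otherwise,
    framing "predict the future movement" as supervised learning.
    """
    pairs = []
    for i in range(len(prices) - window):
        features = prices[i:i + window]
        target = 1 if prices[i + window] > prices[i + window - 1] else -1
        pairs.append((features, target))
    return pairs

series = [10.0, 10.5, 10.2, 10.8, 11.1, 10.9]
for x, y in make_windows(series):
    print(x, y)
```

In the GAN framing, the generator produces candidate continuations of such windows and the discriminator learns to tell them apart from the real next moves.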

Read Using the latest advancements in deep learning to predict stock price movements — from Towards Data Science

4 — Process sensitive healthcare data with Amazon Comprehend Medical

The healthcare sector is a huge field of interest for artificial intelligence. Recently, it has led to promising results in cancer and Alzheimer's detection… But healthcare organizations are often slowed down by the need to comply with protected health information regulations.

This should be easier now thanks to Amazon Comprehend Medical, a new AWS service that uses machine learning to extract medical information with high accuracy. The service can, for instance, extract personal health information. This is really good news, as it will make it easy to anonymize or de-identify sensitive data.

Read Identifying and working with sensitive healthcare data with Amazon Comprehend Medical — from AWS Machine Learning Blog

5 — State-of-the-art Natural Language Processing models

In this project, you will find an up-to-date presentation of the state-of-the-art methods used in Natural Language Processing (NLP). It explains how deep learning models like word embeddings, Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are used to achieve a better “understanding” of human language.
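To make the word-embedding idea concrete: each word becomes a dense vector, and semantic closeness becomes geometric closeness. A toy, standard-library-only illustration (the vectors below are hand-made for the example; a real model would learn them from text):

```python
import math

# Hand-crafted toy embeddings, not from a trained model.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```

The CNNs and RNNs covered in the project consume sequences of such vectors rather than raw words.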

You will also find reference datasets and a summary of state-of-the-art results for the main NLP tasks, such as machine translation, sentiment analysis, and question answering.

I will definitely consider it as my new NLP Bible.

Read Modern Deep Learning Techniques Applied to Natural Language Processing

6 — Machine Learning fighting hackers

“Machine learning is a very powerful technique for security — it’s dynamic, while rules-based systems are very rigid”

A machine learning algorithm recently detected a hacker who had connected from Romania to the cloud account of a large retailer.

Older “rules-based” technologies that were designed to fight specific attacks are not adaptable enough to address new kinds of intrusions. Moreover, the strictness of those techniques leads them to block or flag legitimate users.

Newly developed artificially intelligent software can adapt to hackers’ constantly evolving tactics. These algorithms learn from massive amounts of data on logins, behaviors and previous attacks, and can distinguish legitimate from illegitimate users more precisely.
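The idea of learning “normal” behavior from login data can be sketched very simply: build a per-user baseline and flag logins that deviate strongly from it. The example below is a deliberately naive, standard-library-only caricature; the feature (login hour) and the threshold are invented for illustration, and real systems combine many more signals.

```python
import statistics

# Historical login hours for one user: the learned "normal" behavior.
history = [9, 9, 10, 8, 9, 10, 9, 8, 10, 9]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_suspicious(login_hour, threshold=3.0):
    """Flag logins whose hour deviates strongly from the user's baseline."""
    z = abs(login_hour - mean) / stdev
    return z > threshold

print(is_suspicious(9))   # usual morning login
print(is_suspicious(3))   # 3 a.m. login, far outside the baseline
```

The point of the statistical approach is exactly what the article highlights: the baseline is learned from data, so it adapts as behavior evolves, where a hand-written rule would stay rigid.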

Hackers had better watch out!

Read Artificial Intelligence vs. the Hackers — from Bloomberg

7 — Endlessly generate complex and diverse learning environments and their solutions

A team from Uber AI Labs has been working on open-endedness problems. They were inspired by evolution on Earth, which seems endless and could be compared to a “creative genius unleashed”. Their idea was to build an algorithm that never stops learning ever-greater complexity and novelty.

With POET (Paired Open-Ended Trailblazer), the algorithm developed by Uber AI Labs, a randomly initialized agent is first confronted with a trivial environment. More complex environments are then generated from the first one, and the agent is thus gradually trained.
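The paired agent/environment idea can be caricatured in a few lines: keep (environment, agent) pairs, let the agent improve in its environment, and periodically spawn a slightly harder environment seeded with the current agent. The sketch below is a schematic of that loop, not Uber's implementation: environments are just difficulty numbers, and “training” is a trivial increment.

```python
import random

random.seed(1)

def train(agent_skill, difficulty, steps=50):
    """Toy 'training': the agent's skill creeps up toward the difficulty."""
    for _ in range(steps):
        if agent_skill < difficulty:
            agent_skill += 0.1
    return agent_skill

# Start with a trivial environment and an untrained agent.
pairs = [(1.0, 0.0)]  # (environment difficulty, agent skill)

for generation in range(5):
    difficulty, skill = pairs[-1]
    skill = train(skill, difficulty)
    pairs[-1] = (difficulty, skill)
    # Mutate the environment: spawn a harder one, seeded with this agent.
    harder = difficulty + random.uniform(0.5, 1.5)
    pairs.append((harder, skill))

for difficulty, skill in pairs:
    print(f"difficulty={difficulty:.2f}  agent skill={skill:.2f}")
```

The real algorithm additionally keeps many pairs alive at once and transfers agents between environments, which is where the open-ended diversity comes from.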

What is exciting about this algorithm is that it could train robust agents able to solve problems humans have not even formulated yet.

Read POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer — from Uber Engineering

8 — AlphaStar: the ideal StarCraft II teammate

StarCraft II is a science fiction video game. It is considered one of the most challenging real-time strategy games. And that is what motivated DeepMind! The same company that developed AlphaGo finally managed to train an AI able to beat one of the world’s strongest professional StarCraft players. Let me introduce you to AlphaStar.

AlphaStar is a milestone for artificial intelligence! Indeed, although AI algorithms had achieved great results in lots of video games (Mario, Atari…), they had never managed to overcome the complexity of StarCraft.

To obtain these results, DeepMind researchers built a deep neural network that was first trained by supervised learning on human games. Its performance was then improved using Reinforcement Learning (RL) techniques.

If I played StarCraft II, I would definitely try to get AlphaStar in my team!

Read AlphaStar: Mastering the Real-Time Strategy Game StarCraft II — from DeepMind’s Blog

9 — A look back at Google AI’s 2018

Google Lens can help you learn more about the world around you. Here, Lens identifies the breed of this dog.

If you stayed far away from AI news in 2018, this article, published on the Google AI blog, was written for you. It is a complete summary of the research Google led last year. And the least we can say is that the year was prolific! You will read about AI for Social Good, natural language understanding, perception, quantum computing, and newly open-sourced datasets.

I don’t know what you are going to think about it, but, as far as I am concerned, I am looking forward to seeing what 2019 has to offer :)

Read Looking Back at Google’s Research Efforts in 2018 — from Google AI Blog

10 — First MIT AI Policy Congress

“Compared to the range of skills needed in most jobs, today what machine learning can do is much more narrow”

At the beginning of the month, scientists and policymakers gathered at MIT to discuss regulation of Artificial Intelligence.

They all agreed on the potential of artificial intelligence to solve issues that humans have not been able to figure out so far: curing cancer, helping to protect endangered species…

But their discussions were essentially about how not to lose control. They examined the ethical and social issues raised by AI. For example, they addressed the risk that intelligent machines could massively replace workers.

This article gathers many quotations from the speakers and the conclusions reached by the participants. It is really interesting to read what scientists and policymakers think of our future lives alongside artificial intelligence.

Read AI, the law, and our future — from MIT News
