The Best of AI: New Articles Published This Month (March 2019)

10 data articles handpicked by the Sicara team, just for you

Emna Kamoun
Sicara's blog
7 min read · Apr 9, 2019


Welcome to the March edition of our best and favorite articles in AI published this month. We are a Paris-based company that does Agile data development. This month, we spotted articles about neural networks, Neural Architecture Search and more. We advise you to have a Python environment ready if you want to follow some of the tutorials :). Let’s kick off with the comic of the month:

“At some point, compression becomes an aesthetic design choice. Luckily, SVG is a really flexible format, so there’s no reason it can’t support vector JPEG artifacts.”

1 — Exploring Neural Networks with activation atlases

Neural Networks currently provide the best solution for image-related problems: image classification, object detection… But, up until now, it has been hard to understand how the decision process between layers works.

OpenAI and Google researchers present Activation Atlases, a new way to dive into neural nets and visualize how the decision-making process works. It shows the various concepts each layer learns before finally classifying an image.

You can even start making your own activation atlases using the Jupyter notebooks they provide. I know I will.

Read Introducing Activation Atlases — from OpenAI

2 — Microsoft’s Seeing AI

Microsoft’s Seeing AI is a free app for the blind and low-vision community. Based on object and scene recognition algorithms, it narrates a description of a photo when you tap on it. It can also describe a person and their mood.

Even cooler, you can now get a description of the objects in an image and how they are related, which leads to a better understanding of the scene.

Read Blind users can now explore photos by touch with Microsoft’s Seeing AI

— by Devin Coldewey

3 — Trained neural nets perform much like humans on classic psychological tests

The ‘gestalt effect’ is the idea that the human brain can perceive a whole image from certain fragments of it. The theory dates back to the early 20th century, but the current question is: can neural nets do the same?

Researchers from Google Brain discovered that when a neural network is trained to recognize complete triangles, it can also classify an illusory triangle (image A) as a complete one.

This result shows how neural nets imitate the brain once again and it may be ‘a first step into a new field of machine psychology’.

Read Trained neural nets perform much like humans on classic psychological tests — from MIT Technology Review

4 — Coconet: the ML model behind the Bach Doodle

On March 21st, Google published the first AI-powered Doodle to celebrate the well-known composer and musician Johann Sebastian Bach.

Based on the Coconet ML model, this doodle can harmonize any melody you create in Bach’s style. Briefly, to train this model they used a dataset of 306 chorale harmonizations by Bach. For each piece, they randomly erased some notes and let the model predict the missing notes and restore the music.
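The erase-and-predict training setup can be sketched in a few lines of NumPy (toy piano roll, hypothetical shapes — not Magenta’s actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy piano roll: 4 voices x 32 time steps, each entry a MIDI pitch.
piano_roll = rng.integers(low=36, high=84, size=(4, 32))

# Randomly erase roughly half of the notes; -1 marks an erased note.
erased = rng.random(piano_roll.shape) < 0.5
masked_input = np.where(erased, -1, piano_roll)

# A Coconet-style model takes masked_input (plus the mask itself) and is
# trained to predict the original pitches at the erased positions.
targets = piano_roll[erased]
```

Repeating this with fresh random masks over the 306 chorales gives the model many different "fill in the blanks" puzzles from the same small dataset.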

A detailed explanation of how and why the model works is published by Google Magenta.

Read Coconet: the ML model behind today’s Bach Doodle — from Magenta blog

5 — Neural network design automation

One of the trending research topics in artificial intelligence is automatically designing neural networks, known as Neural Architecture Search (NAS). This is a very promising field, but it demands a lot of resources. In fact, it took Google 48,000 GPU hours to build a single convolutional neural network.

The big news is, MIT researchers presented an algorithm that can build a CNN 200 times faster than the state-of-the-art method. This is a huge step for the field, since it makes NAS accessible to more people.

Two main innovations led to this huge optimization: “path-level” binarization and pruning, and hardware-aware search. Find out more about them!

Read Kicking neural network design automation into high gear — from MIT News

6 — Turing Award Goes to AI Pioneers

Yoshua Bengio, Geoffrey Hinton and Yann LeCun

One of the biggest pieces of news this month is the announcement of the prestigious Turing Award winners: Yoshua Bengio, Geoffrey Hinton and Yann LeCun. Often called the “Nobel Prize of Computing”, the award comes with a $1 million prize.

The ‘pioneers of AI’ believed in an approach to artificial intelligence based on neural networks, and they’ve been proved right! In 2012, results showed that neural nets are actually good at image recognition, and since then they’ve been used everywhere.

Read The three pioneers of deep learning have won the $1 million Turing Award — from Technology Review — The Download

7 — AWS Deep Learning Containers

In response to their customers’ needs, AWS has created new containers tailored to Deep Learning projects.

You can choose the Docker image that suits you based on the framework (TensorFlow or MXNet), the environment (CPU or GPU) and a few other parameters.

Plus, these containers are easy to use and you can find a detailed explanation on how to use them in the article.

Read AWS Deep Learning Containers — from Jeff Barr

8 — Breast cancer detection

It’s always good to see AI’s potential for humanity, that’s why I picked a newly published paper on how to improve radiologists’ performance in breast cancer screening.

Researchers at New York University trained a deep convolutional neural network on over a million images of mammogram exams. This model achieves an AUC of 0.895 in predicting the presence of a malignant breast tumor.

A reader study was conducted with 14 radiologists to validate the model and the results show that ‘‘a hybrid model, averaging probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately’’.
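The hybrid model quoted above is simply an average of two probabilities; a minimal sketch (the case numbers are made up):

```python
def hybrid_probability(radiologist_prob: float, model_prob: float) -> float:
    """Average a radiologist's estimated probability of malignancy with the
    neural network's prediction, as in the paper's hybrid model."""
    return (radiologist_prob + model_prob) / 2.0

# Hypothetical case: the radiologist estimates 0.30, the network predicts 0.70.
hybrid = hybrid_probability(0.30, 0.70)
print(hybrid)
```

The reader study suggests that even this simple averaging outperforms either the radiologist or the network alone.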

If you want to know more about the code and the best-performing models, you can find them on GitHub.

Read Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening

9 — Towards Robust and Verified AI

The robustness of a system has always been a concern for software engineers, and practices were established to ensure a bug-free program before deployment. Machine learning systems, however, call for new techniques to avoid failures. DeepMind highlights three of them:

- Testing consistency with specifications: designing and using an adversary to detect even small failures and uncover strange behaviors.
- Training specification-consistent models: training models that remain consistent regardless of the adversarial testing used, to avoid overestimating their consistency.
- Formal verification: limiting the output space of a model by computing and refining geometric bounds.
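The adversarial-testing idea in the first point can be illustrated with a toy fast-gradient-style attack on a linear classifier (pure NumPy, illustrative numbers — not DeepMind’s code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: p(class 1 | x) = sigmoid(w . x + b).
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.5, 0.2])      # clean input, classified as class 1
p_clean = sigmoid(w @ x + b)

# The gradient of the score (w . x + b) w.r.t. x is just w, so stepping
# against sign(w) is the cheapest way to push the prediction down.
eps = 0.3
x_adv = x - eps * np.sign(w)
p_adv = sigmoid(w @ x_adv + b)  # the small perturbation flips the class
```

A specification here would be "the predicted class must not change within an eps-ball of the input"; the adversary above is exactly the kind of test that exposes violations of it.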

Read Towards Robust and Verified AI: Specification Testing, Robust Training, and Formal Verification by DeepMind

10 — The smart paintbrush

This is one of my favorites!

This month, NVIDIA Research presented a new app that transforms rough sketches into masterpieces. It’s called GauGAN, it’s based on Generative Adversarial Networks (obviously), and it creates lifelike landscapes with remarkable ease.

Draw in a pond, and nearby elements like trees and rocks will appear as reflections in the water. Swap a segment label from “grass” to “snow” and the entire image changes to a winter scene, with a formerly leafy tree turning barren.
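That label swap is easy to picture: GauGAN conditions its generator on a semantic label map, and editing the map re-conditions the whole scene (toy label ids, not NVIDIA’s code):

```python
import numpy as np

# Hypothetical segment labels for a GauGAN-style semantic map.
GRASS, SNOW, TREE, WATER = 1, 2, 3, 4

# Toy label map the generator would condition on.
label_map = np.array([[TREE,  GRASS, GRASS],
                      [GRASS, GRASS, WATER]])

# Swap every "grass" pixel for "snow"; the generator would then render
# a winter version of the same layout, other segments untouched.
winter_map = np.where(label_map == GRASS, SNOW, label_map)
```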

If you want to see how it works check out this video.

Read Stroke of Genius: GauGAN Turns Doodles into Stunning, Photorealistic Landscapes by Isha Salian
