The Best of AI: New Articles Published This Month (October 2018)

10 data articles handpicked by the Sicara team, just for you

Antoine Ogier
Sicara's blog
8 min read · Nov 5, 2018

Welcome to the October edition of our best and favorite articles in AI that were published this month. We are a Paris-based company that does agile data development. This month, we spotted articles about deep learning, visualization, ethics and more. We advise you to have a Python environment ready if you want to follow some of the tutorials :). Let’s kick off with the comic of the month:

‘YOUR CLOTHES. GIVE THEM TO ME.’ ‘Shit, uh… you are now breathing manually!’ ‘I AM ALWAYS BREATHING MANUALLY.’

1 — Asking the Right Questions with Google

What do you do when a search engine doesn’t find the answers you want? You reformulate your question.

Google is trying to emulate that with their new Active Question Answering agent. When you ask it a question, it generates lots of questions with a similar meaning. It looks up all their answers and selects the best one to return to you.
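
Roughly, the loop looks like this (a hedged sketch only; the real open-source system trains the question reformulator with reinforcement learning, and every name below is a placeholder):

# Illustrative sketch of the reformulate-answer-select loop, not Google's code.
def answer_with_reformulation(question, reformulate, lookup_answer, score):
    candidates = reformulate(question)                 # paraphrases of the question
    answers = [lookup_answer(q) for q in candidates]   # answer each paraphrase
    return max(answers, key=score)                     # keep the best-scoring answer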

This could pave the way for a major improvement in chatbots. Rather than making you rephrase what you are asking when they don’t get it, they could do it themselves!

Plus, it’s open source.

Read What’s New in Deep Learning Research: How Google Uses Reinforcement Learning to Ask All the Right Questions — from Jesus Rodriguez

2 — Teaching an AI to Learn

Usually, when we train an AI to perform some task, we have a way of telling how well it is doing, or at least we can show it some examples of the task being done well.

But what about highly complex tasks, where performance is hard to assess? Sometimes even humans cannot show the AI how to do something — like managing a whole city’s transit system.

A new approach called Iterated Amplification enables an AI to break down the task into smaller sub-tasks. It then asks humans for demos of those small parts. Based on just that, it can solve the main task on its own.

This is very early-stage technology, but since it comes from OpenAI, you should definitely stay tuned in the coming months.

Read Learning Complex Goals with Iterated Amplification — from OpenAI

3 — Moral Machine

A major issue with self-driving cars is that they will eventually need to make moral choices that even humans cannot agree on.

Should it avoid a crowd if it means hitting another person? Should it protect passengers over pedestrians? Young over old? Even though such situations will be extremely rare, it is clear that they will sometimes occur.

This article analyzes the results of Moral Machine, an experiment that asked millions of people to decide on such scenarios.

The answers vary a lot from country to country. For instance, people in Western countries choose to protect the young much more often than people in Eastern ones.

It is essential to think about these questions now, because at some point carmakers and programmers will have to make exactly these kinds of decisions: once a car faces two bad outcomes, it cannot just refuse to choose.

Read Should a self-driving car kill the baby or the grandma? Depends on where you’re from — from the MIT Technology Review

We recently published an article on the ethics of AI (in French). Feel free to check it out as well!

4 — Graphs With Answers

Making graphs that help you understand a dataset is hard. When there are many variables and data points, any attempt at visualization can feel messy, incomplete and misleading.

This post argues that there is a solution: any time you make a graph, decide on the one question you want it to answer.

By ensuring that your graph has a clear focus, you avoid making graphs that are too general — and thus useless.

Throughout the article, the author walks you through the process of exploring a dataset, one question at a time. In just a few graphs, he makes you feel like you know the data intimately.

Read Ask the Question, Visualize the Answer — from FlowingData

5 — From Curiosity to Procrastination

In reinforcement learning, an AI learns to interact with its environment and gets rewards when it performs well. It uses these rewards to learn how to perform a task — e.g. playing a game or finding items in a maze.

But sometimes rewards are hard to come by, and the AI does not know whether it is doing well until it finds one. A solution to this is curiosity, where the agent rewards itself for discovering new things.
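
To make the idea concrete, here is a minimal sketch of one simple form of curiosity, a count-based novelty bonus; the work described in the article uses a more elaborate, memory-based notion of novelty, so treat this purely as an illustration:

# Toy count-based curiosity: reward states the agent has rarely visited.
from collections import defaultdict

visit_counts = defaultdict(int)

def curiosity_bonus(state, scale=1.0):
    visit_counts[state] += 1
    return scale / (visit_counts[state] ** 0.5)

# The agent then maximizes: total_reward = env_reward + curiosity_bonus(state)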

This usually works wonders, but researchers tried something different this time: they put an AI with curiosity in a maze with a TV and a remote control.

Guess what happened: the AI stayed forever in front of the TV, switching channels continuously! It had learned procrastination.

This actually makes a lot of sense: a new channel is ‘something new’ for the AI, so switching endlessly is highly rewarding to its curiosity.

Maybe sometimes it is not a good idea to make AIs too human :)

Read Curiosity and Procrastination in Reinforcement Learning — from Google’s AI blog

6 — How do You Draw a Cat?

Quick, Draw! is a game that asks you to draw something so that an AI recognizes it. It produced a huge dataset of more than 50 million drawings by people all over the world.

This article explores smart ways to visualize this fun dataset. Did you know Americans draw ice cream cones with one scoop and Italians with three? Can you think of all the ways people draw yoga poses?

Using methods like t-SNE and autoencoders, we can turn this disorganized mass of images into a map. With it, we can explore these patterns and much more.
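
As a hedged sketch of how such a map can be built (assuming the drawings are available as flattened bitmaps; the article pairs this kind of projection with autoencoder features):

# Project drawings onto a 2-D map with t-SNE (illustrative placeholder data).
import numpy as np
from sklearn.manifold import TSNE

drawings = np.random.rand(500, 28 * 28)  # stand-in for flattened drawings

embedding = TSNE(n_components=2, perplexity=30).fit_transform(drawings)
# Each row of `embedding` is an (x, y) position; scatter-plot it to explore
# clusters of similar drawings.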

I have always been quite fond of visualization, but it’s even better with cats!

Read Machine Learning for Visualization — from Ian Johnson

7 — A Sexist AI at Amazon

Amazon recently tried to use machine learning to select candidates for job interviews by analyzing their resumes. It did not go very well.

Basically, the algorithm looked at past recruitment data — where males are vastly over-represented — and decided that women really weren’t as likely to be hired, so it penalized their resumes.

This is a common pitfall for all kinds of recommendation systems, and this story shows that even large companies with lots of resources can fall for it.

The good news is that the tool has now been scrapped, so you can submit your application without any worries! But it is a good reminder that machine learning systems can only be as good as their data.

Read Amazon scraps secret AI recruiting tool that showed bias against women — from Reuters

8 — Uncertainty Matters

Hitting the center on average

Sometimes it’s not enough to know that you are hitting the center on average; you also need to know how far off any single shot might be! Estimating confidence is essential to good predictions.

In January, data analyst Erik Bernhardsson decided to provide uncertainty estimates for every single plot and prediction he makes.

After that experiment, he published this guide which gives simple methods and code to estimate uncertainty. Great if you want to learn how to do this without diving into complex math!
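
One of the simplest of those methods is the bootstrap: resample your data many times and look at the spread of the statistic you care about. A hedged sketch (the guide’s own code may differ):

# Bootstrap confidence interval for the mean (illustrative sketch).
import numpy as np

def bootstrap_ci(data, n_resamples=10_000, ci=95):
    means = [np.mean(np.random.choice(data, size=len(data), replace=True))
             for _ in range(n_resamples)]
    return np.percentile(means, [(100 - ci) / 2, 100 - (100 - ci) / 2])

sample = np.random.normal(loc=5.0, scale=2.0, size=200)
print(bootstrap_ci(sample))  # roughly [4.7, 5.3]; exact values vary by sample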

Read The hacker’s guide to uncertainty estimates — from Erik Bernhardsson

9 — Custom Synthetic Faces

You have probably already heard about Generative Adversarial Networks (GANs) being used to generate faces or make people look older.

Here, the author goes one step further. In addition to generating faces, his system can learn to modify any single facial feature continuously.

This is a big deal. Most existing models learn to perform just one discrete transition (young to old, woman to man): both the type and the extent of the transformation are fixed. Any change to this would require a full retraining of the model and a new dataset.

Here, once the system is trained you can quickly add new modifiable features. Plus, you can apply any amount of transformation — whether you want the face to look just slightly younger or super old, you can do it.
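
Very loosely, the trick is to find a direction in the generator’s latent space for each feature and move along it by an adjustable amount. A hypothetical sketch (the generator and the “age” direction below are placeholders, not the author’s code):

# Illustrative sketch: shift a latent vector along a learned "age" direction.
import numpy as np

z = np.random.randn(512)              # latent code for one generated face
age_direction = np.random.randn(512)  # stands in for a learned feature axis
age_direction /= np.linalg.norm(age_direction)

slightly_younger = z - 0.5 * age_direction
much_older = z + 3.0 * age_direction
# face_a = generator(slightly_younger); face_b = generator(much_older)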

Interested? Check out this impressive demo.

Read Generating custom photo-realistic faces using AI — from Shaobo GUAN

10 — Spotting Fake Videos

‘We are entering an era in which our enemies can make it look like anyone is saying anything at any point in time’. Former US president Barack Obama never said that. And yet here is a video showing just this.

This is just one example of what deepfakes — AI-generated fake videos — can look like. These fakes can be incredibly convincing and while some can be quite fun, they can also be used to deceive and spread false information.

In this article, researchers explain the methods they have found to detect deepfakes. From the way people blink to telltale compression artifacts, there are still a few signs that give deepfakes away.

But the fight is an endless one. As soon as a defect in fakes is found, fakers will move to correct it. Continued research is essential in order to remain one step ahead.

Read These new tricks can outsmart deepfake videos — for now — from Wired

We hope you’ve enjoyed our list of the best new articles in AI this month. Feel free to suggest additional articles or give us feedback in the comments; we’d love to hear from you! See you next month.

Read the September edition

Read the August edition

Read the July edition

Read the June edition

Read the original article on Sicara’s blog here.
