The Best of AI Articles Published in December 2019

Deeper Fakes, responsible data science and Artificial General Intelligence, while listening to an AI-generated Christmas carol!

Jean-Baptiste Jézéquel
Sicara's blog
7 min read · Jan 9, 2020


What happened last December?

Quick reminder if you’re not familiar with the concept: we’re a Paris-based deep tech company specialized in Computer Vision. Every month we share our ten favorite AI-related articles (or other finds). This is the digest of December 2019.

I know you’re probably not hungry after the Holidays, but here’s the article’s menu: better and stronger deep fakes, socially and environmentally responsible data science, and Artificial General Intelligence. New decade, time to adapt: I’m kicking off with a meme instead of a comic. If you’re a data scientist under 30 years old and don’t get it, I’d love your feedback.

Sigmoid car vs. ReLU car
Bottom one is Tesla’s new “Cybertruck” (order now if you feel like you just have too much money)

An AI-generated Christmas Song

The AI XMAS song generated with GPT-2

I’m going to open up this Best Of AI with the song that has been stuck in my head for a few weeks. It’s a Christmas song and, brace yourself, it’s not Mariah Carey! The interpretation is by a musician in Denmark; the lyrics are from a neural network.

Research scientist Janelle Shane fine-tuned GPT-2 on 240 Christmas carols, then asked her model to produce a song about Rudolph the Red-Nosed Reindeer. The result is brilliantly terrible.

The interpretation is so pure that my family didn’t notice anything weird when I played this at Christmas dinner. Don’t hesitate to use this as background music while you keep reading!
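
If you feel like brewing your own carol, the general recipe is easy to sketch: fine-tune GPT-2 on a small corpus, then sample from it. Below is a minimal sketch with Hugging Face’s transformers library; the file name and hyper-parameters are made up, and this is not Janelle Shane’s exact setup.

```python
# A minimal sketch of the recipe: fine-tune GPT-2 on a small corpus of
# carols, then sample a song. "carols.txt" and the hyper-parameters are
# made up; this is not Janelle Shane's exact setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

ids = tokenizer(open("carols.txt").read(), return_tensors="pt").input_ids
model.train()
for epoch in range(3):
    for i in range(0, ids.size(1) - 512, 512):
        chunk = ids[:, i : i + 512]
        # With labels == inputs, the model computes the language-modeling
        # loss (next-token prediction) internally.
        loss = model(chunk, labels=chunk).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Ask the fine-tuned model for a song about Rudolph.
model.eval()
prompt = tokenizer("Rudolph the Red-Nosed Reindeer", return_tensors="pt").input_ids
sample = model.generate(prompt, max_length=200, do_sample=True, top_p=0.9)
print(tokenizer.decode(sample[0]))
```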

What happened in the field in 2019?

Time for an AI-rewind
Let’s wrap up what happened in 2019

2019 is over, and I reckon it’s healthy to look back at what happened this year before jumping into a new one. Xavier Amatriain, former Research Director at Netflix, can help you with that: his review gives a nice perspective on this year’s achievements and challenges in Machine Learning.

StyleGAN2: the revenge of deep fakes

Fluid transition between fake faces
Smooth interpolation between StyleGAN2’s outputs (full video)

Last year, Nvidia caused a sensation when they introduced StyleGAN (the generative algorithm behind thispersondoesnotexist.com). Some people went “awesome, we can do that now?!” while others feared it could be put to very malicious use.

Apparently that was not enough, so they just made StyleGAN2. They fixed some flaws of the first version (generated images often had water-droplet artifacts in the background; now they don’t) and added some psychedelic new features like a smoother latent space, which is responsible for the animation above.

You can find the full paper here and the code there; this blog post explains all the changes.
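
For the curious, the animation boils down to walking through the generator’s latent space. Here is a minimal sketch of that interpolation, where generator stands in for any pretrained StyleGAN2 generator; Nvidia’s actual TensorFlow-based API looks different.

```python
# A minimal sketch of latent-space interpolation, the trick behind the
# animation above. `generator` stands in for any pretrained StyleGAN2
# generator; Nvidia's actual (TensorFlow-based) API looks different.
import torch

def interpolate(generator, z_start, z_end, n_frames=60):
    frames = []
    for t in torch.linspace(0.0, 1.0, n_frames):
        # Linear interpolation in latent space: because the latent space
        # is smooth, neighbouring codes decode to visually similar faces.
        z = (1 - t) * z_start + t * z_end
        frames.append(generator(z))
    return frames

# Two random identities; 512 is StyleGAN2's latent dimensionality.
z_a, z_b = torch.randn(1, 512), torch.randn(1, 512)
# frames = interpolate(generator, z_a, z_b)  # then stitch frames into a video
```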

Deploy models to production without unfair bias

Sketch of Human Bias in an ML pipeline
So basically a Machine Learning pipeline comes with a lot of human bias (credit)

Machine Learning might be fun, but we have to keep in mind that it is not a game: the code we write influences people’s lives. Our algorithms learn from the examples we provide them and are therefore prone to replicate every unfair bias present in the data. For instance, gender or race can become a factor in credit decisions or resume screening, even when it is not an explicit input.

We want our algorithms to be better than us, not to reproduce our mistakes. Following this idea, Google released Fairness Indicators this month: a suite of tools built on TensorFlow to help data scientists diagnose unfair biases in their models. A good step in the right direction!
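
Under the hood, the core diagnostic is simple: evaluate your classification metrics per sensitive group instead of globally, and look for gaps. Here is a hand-rolled sketch of that idea with made-up data; the real library plugs into TensorFlow Model Analysis and does much more.

```python
# Fairness Indicators' core diagnostic is evaluating metrics sliced by
# sensitive group instead of globally. A hand-rolled sketch with made-up
# data, to show what a sliced evaluation looks like:
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M"],  # hypothetical sensitive attribute
    "label":  [1, 0, 1, 1, 0, 1],              # ground truth
    "pred":   [0, 0, 0, 1, 0, 1],              # model decisions
})

for group, rows in df.groupby("gender"):
    positives = rows[rows.label == 1]
    # False negative rate per slice: a large gap between groups is a red flag.
    fnr = (positives.pred == 0).mean()
    print(f"{group}: false negative rate = {fnr:.2f}")
# F: false negative rate = 1.00
# M: false negative rate = 0.00
```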

Machine Unlearning

An eraser
How does a neural network forget data?

Just as people whose data is processed by AI algorithms have the right to know they were not affected by unfair bias, they also have the right to ask for their data to be deleted. The problem is that any model trained on their data may have memorized it, and it would be incredibly expensive to re-train all your models every time a data instance is removed.

Researchers from the University of Toronto introduced a new way to train deep neural networks so that they can unlearn more easily when a data instance is removed from the training set. A great step towards full GDPR compliance, and it seems it can also be used when some examples simply become irrelevant. You can find their article on arXiv.
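
To give an intuition of their approach, called SISA training: shard the training set, train one model per shard, and aggregate predictions, so deleting an example only retrains one shard. The toy sketch below keeps just the sharding part, with scikit-learn models standing in for deep networks.

```python
# A toy version of the paper's SISA idea (as I understand it): one model
# per data shard, predictions aggregated by vote, so deleting an example
# only retrains its own shard. The real framework also slices each shard
# for even cheaper retraining; that part is omitted here.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_SHARDS = 5

def train_shards(X, y):
    shard_indices = np.array_split(np.arange(len(X)), N_SHARDS)
    models = [LogisticRegression().fit(X[idx], y[idx]) for idx in shard_indices]
    return shard_indices, models

def unlearn(X, y, shard_indices, models, removed):
    # Find the shard holding the removed example and retrain only that one.
    for s, idx in enumerate(shard_indices):
        if removed in idx:
            kept = idx[idx != removed]
            shard_indices[s] = kept
            models[s] = LogisticRegression().fit(X[kept], y[kept])
    return shard_indices, models

def predict(models, X):
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) > 0.5).astype(int)  # majority vote
```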

What’s the environmental impact of your model?

A lot of pollution
Training a model can emit as much CO2 as a transatlantic flight

Another responsibility we have as data scientists is towards the environment. GPU computing has a big environmental footprint: training a state-of-the-art Deep Learning model can now take years of cumulative computing time (distributed across many units, of course, so the wall-clock time fits in a matter of days).

Following their recent paper on quantifying the carbon emissions of Machine Learning, a team from Montreal published a website where you can estimate your own emissions in three clicks! The core idea is to include this information in future research papers, so the environmental impact of our work no longer goes ignored.

For fun, I wanted to see what it takes to train GPT-2 (the model behind our beautiful Christmas song). One training run produces as much CO2 as a round trip between Paris and Los Angeles. You can only imagine the cost of the hyper-parameter tuning!
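
The arithmetic behind such a calculator fits on the back of an envelope: the energy your hardware draws, times the carbon intensity of the local grid. The numbers below are illustrative, not the website’s exact figures.

```python
# Back-of-the-envelope version of the calculator's arithmetic: energy drawn
# by the hardware times the carbon intensity of the local grid. All numbers
# below are illustrative, not the website's exact figures.
GPU_POWER_KW = 0.25      # one V100-class GPU drawing ~250 W
N_GPUS = 8
TRAINING_HOURS = 168     # one week of wall-clock training
KG_CO2_PER_KWH = 0.4     # grid carbon intensity, varies a lot by region

energy_kwh = GPU_POWER_KW * N_GPUS * TRAINING_HOURS
emissions_kg = energy_kwh * KG_CO2_PER_KWH
print(f"{energy_kwh:.0f} kWh -> {emissions_kg:.0f} kg CO2-eq")  # 336 kWh -> 134 kg
```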

Solving differential equations with a neural network

A differential equation that is now easily solvable using AI
Neural nets to solve math equations

To be honest, when I first read that a neural network could now solve differential equations and compute integrals, I didn’t care one bit. I just assumed conventional solvers had been doing this for decades, so why bother? It turns out I was wrong, and it really is a big deal: the deterministic state of the art only reaches about 85% accuracy on function integration, while this new solution gets close to 100%, with an inference time under one second!

The research paper is here, and here is an article in the MIT Technology Review that summarizes it very well.
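
The key trick, as I read the paper, is to treat symbolic math as a translation problem: expressions become token sequences, and a seq2seq model learns to map a function to its integral. A toy serializer gives the flavor:

```python
# The paper's key trick, as I read it: treat symbolic math as translation.
# Expressions are serialized into token sequences (prefix notation needs no
# parentheses) and a seq2seq model maps a function to its integral. A toy
# serializer for expression trees written as nested tuples:

def to_prefix(expr):
    if isinstance(expr, tuple):
        op, *args = expr
        return [op] + [token for arg in args for token in to_prefix(arg)]
    return [str(expr)]

# x * sin(x)  ->  ['mul', 'x', 'sin', 'x']
print(to_prefix(("mul", "x", ("sin", "x"))))
```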

NeurIPS 2019 Keynote: the future of Deep Learning according to Yoshua Bengio

It’s hard to imagine the best of AI this month without mentioning the NeurIPS conference. There were a lot of interesting talks on current and future challenges of Machine Learning. I can’t mention them all, so I’ll focus on the one that impressed me the most.

Yoshua Bengio, one of the founding fathers of Deep Learning, talked about the “System 2” Deep Learning paradigm: what current Deep Learning is missing to match human intelligence, and promising leads on how to face the challenges of compositionality, causality, and out-of-distribution generalization. One of these leads is, of course, meta-learning, which has been a research interest at Sicara for some time now.

If you missed it, I can only advise you to catch up as soon as you get an hour of free time. If you don’t have an hour, the first 12 minutes will do as an introduction to the challenges that will surely shape the future of AI!

We need a better measure of intelligence than chess and StarCraft

François Chollet and Sicara's data scientists
Creator of Keras François Chollet (4th from the left) with our team at Sicara

As a scientist, what thrills me most in the field of Machine Learning is Artificial General Intelligence (AGI): an AI that can learn any task a human can. It is everything that current AI isn’t, as Yoshua Bengio explained.

This article is an interview with researcher François Chollet about how we measure intelligence, and why we need better benchmarks than video games or board games if we want to do more than design AIs that harness millions of examples and thousands of years of computing time to learn one specific task.

ObjectNet: the proof that you’re smarter than a CNN

Left: oven gloves on a bed. Right: a hammer on a hand.
Examples of images in ObjectNet

A perfect example of the poor generalization abilities of our Machine Learning algorithms was provided this month by MIT and IBM researchers, who spent three years designing ObjectNet. The dataset reads like a parody of ImageNet: objects taken out of context, put in odd positions, and shot from random angles. They used it to test object detectors trained on ImageNet and, surprise, the accuracy was cut in half. Here is the article on MIT News explaining everything.

I just love this: state-of-the-art algorithms trained for weeks on millions of images fail to recognize a hammer because it’s on a hand instead of in a hand. It shows how much progress we still have to make.
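
If you want to measure the damage on your own classifier, the experiment is easy to approximate: take an ImageNet-pretrained model and score it on out-of-context photos. A minimal sketch, with a hypothetical folder of such images:

```python
# A minimal sketch of an ObjectNet-style stress test: score an
# ImageNet-pretrained classifier on a folder of out-of-context photos.
# The folder path is hypothetical, and the mapping from ObjectNet classes
# to ImageNet label indices is left out for brevity.
import torch
from torchvision import datasets, models, transforms

model = models.resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("objectnet_sample/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

correct, total = 0, 0
with torch.no_grad():
    for images, labels in loader:
        predictions = model(images).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.numel()
print(f"accuracy: {correct / total:.1%}")  # expect roughly half the ImageNet score
```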

That’s it for December and, with it, for 2019. Now 2020 will be what you make it. Need data science services for your business? Want to apply for a data science job at Sicara? Feel free to contact us; we would be glad to welcome you to our Paris office.
