Top 10 content on data & machine learning of March 2018

1. Article: “What worries me about AI” — Francois Chollet.

One of the dominant topics of March was AI safety and the fallout of Cambridge Analytica’s sneaky use of Facebook’s Graph API. The irony didn’t escape me that this is a Google employee bitching hard about the industry’s (and Facebook’s in particular) irresponsible behavior towards data safety. It’s a good read regardless, although it doesn’t provide a good answer to the obvious question of what Google itself is doing about it.

On the same topic, Yonatan Zunger (ex-Google, now Humu) wrote this short piece that is also worth reading.
(If you are interested in the topic you might want to read Norbert Wiener’s Cybernetics.)

2. Article: “AI weeds: what they are, how they could choke off the internet” — Joelle Jenny, World Economic Forum.

This piece captures very well that we need to worry about preventing unintended consequences just as much as we worry about malicious attacks with AI. When machine learning really takes off, its power consumption alone will make the concerns around Bitcoin’s energy use look pale. Small AI programs gone rogue are a real threat if systems are not designed with safety measures in mind.

3. Podcast: Adversarial Attacks Against Reinforcement Learning Agents with Ian Goodfellow & Sandy Huang, This Week in Machine Learning

Goodfellow is the brainfather of the GAN, and together with Huang he explains more about their research on how easy it is to fool neural networks.
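To give a flavor of how such attacks work, here is a minimal sketch of Goodfellow’s fast gradient sign method (FGSM) on a toy logistic “classifier” — not the actual research code, and the weights and input are just random illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # fixed toy model weights
x = rng.normal(size=8)   # a clean input
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x):
    # Binary cross-entropy of the toy model on (x, y)
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss w.r.t. the input (closed form for logistic loss)
grad_x = (sigmoid(w @ x) - y) * w

# FGSM: nudge every input dimension by epsilon in the direction
# that increases the loss the most
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print(loss(x), loss(x_adv))  # the adversarial input yields a higher loss
```

The unsettling part, as discussed in the podcast, is that for image classifiers such a perturbation can be small enough to be invisible to a human while completely changing the model’s prediction.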

4. Podcast: Navigating AI Safety — From Malicious Use to Accidents — Future of Life Podcast.

This podcast explores all the topics above: AI safety, the Cambridge Analytica scandal, and Goodfellow’s research. But it also explores new potential research directions and how to tackle these problems.

5. Podcast: O’Reilly Data Show — Unleashing the potential of Reinforcement Learning with Danny Lange (Unity and former Uber).

Unity provides a great platform for machine learning enthusiasts, especially for simulation. Personally though, I was really interested in his analysis of why feature selection and exploration are so much more important than the raw algorithm itself.

6. Article: Understanding deep learning through neuron deletion — Deepmind.

There is an interesting balance between letting an AI (or neural network, in this case) run its course and making sure that its every move is understood. Deepmind’s biggest recent success was its AlphaGo Zero neural network, which beat the original AlphaGo in only a couple of days without any human input. In other words, human understanding and input can slow down progress. At the same time, to improve algorithms it’s beneficial if you at least understand what the network did. To explore neural networks further, Deepmind is drawing inspiration from experimentation in neuroscience. In this blog Deepmind describes how they went deep in trying to better understand the inner workings of their neural networks.

7. Article: Flippy the Burger-Making Robot was Fired After Just One Day at Work — Jessica Miley.

The inflated hype around AI will give us many ill-fitting products and services in the years to come. This article is a great example of what can go wrong if you chase the latest hype without a deep understanding of what you are actually trying to improve.

8. Article:“12 Breakthroughs That Shaped today’s Artificial Intelligence” — Stephen Moyers

It’s good to get some perspective on the trajectory of AI throughout its history. It shows that progress isn’t linear.

9. Article: The Productivity Gain: Where Is It Coming From And Where Is It Going To? — Rodney Brooks.

Technically this article is from February, but it’s a great exploration of how you can increase productivity with AI (and automation in general).

10. Article: Reflections on Innateness in Machine Learning — Thomas G. Dietterich

Gary Marcus has been critical of the hype around deep learning and wrote “A Critical Appraisal”. It’s a good reminder that we don’t have a clue yet how to overcome the many hurdles on the way to AGI. While parts of the discussion between Dietterich, Marcus, and many others are sometimes a bit too technical or purely about semantic details, I do think this debate is largely fascinating and that language really matters. This blog by Dietterich is really articulate and pushes the debate forward.

Some extras:

Paul Allen Wants to Teach Machines Common Sense — The New York Times
Principles of computation in neural networks, real and artificial — Medium
Why Humans Learn Faster than AI for Now — Medium

Written by

Product @ Quin
