Personal Views on the Future of Artificial Intelligence

Arnaud Sahuguet
Machine Intelligence Report
5 min read · Jan 31, 2016

On January 11th, I attended the first (and public) day of the NYU Symposium on the Future of AI, hosted by our colleagues and neighbors at NYU. Here are a few notes and thoughts about the event.

I could not find any website describing the event. Here is a scan of the program of the first day.

First day program for the NYU Symposium on the Future of AI.

Keep in mind that I am not an AI person; my PhD is in databases. Take everything I say with a grain of salt. Also, the field is moving very fast: since January 11th, Google DeepMind has beaten the European Go champion with its AlphaGo software.

Industry vs Academia

Key industry players were present at the conference, including Google, Facebook and Microsoft. The overall message from industry can be summarized as:

  1. « AI that benefits the many, not the few » (Eric Schmidt)
  2. AI research that is done collaboratively and in the open with publications, code, hardware and datasets
  3. a set of best practices to move beyond bespoke, one-off solutions

Industry repeatedly acknowledged that the next good idea is more likely to come from academia. This might be true in theory. In practice, the most successful AI techniques require an enormous amount of data, data that academia usually does not have access to.

Research Directions & Challenges

At the technical level, the research directions mentioned were around (a) integration with reasoning, attention, planning, memory; (b) combination of supervised, unsupervised and reinforcement learning; and (c) effective unsupervised learning.

AI Cheat sheet

Supervised learning is the machine learning task of inferring a function from labeled training data.

Unsupervised learning is the machine learning task of inferring a function from unlabeled data.

« Unsupervised learning is the “dark matter of AI” », Yann LeCun.
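To make the distinction concrete, here is a minimal Python sketch (mine, not from the symposium) that feeds the same toy data to a supervised classifier and to an unsupervised clusterer, using scikit-learn:

```python
# Minimal sketch: the same toy data used two ways.
# Supervised learning infers a function from (x, y) pairs; unsupervised
# learning has to find structure in the x's alone.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = [[0.0], [0.2], [0.9], [1.1]]   # inputs
y = [0, 0, 1, 1]                   # labels, only available in the supervised case

# Supervised: fit a classifier on labeled examples, then predict labels.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.1], [1.0]]))  # -> [0 1]

# Unsupervised: no labels; let the algorithm discover two groups on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                   # e.g. [0 0 1 1] (cluster ids are arbitrary)
```

The clusterer recovers the same grouping here only because the toy data is trivially separable; discovering structure without labels is much harder in general, hence the “dark matter” quip.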

At the meta-level, challenges included (1) provably safe AIs (Microsoft), (2) AIs that can interact with humans, aka AIs like us, and (3) better platforms that can bring AI to the many (Google).

If, like me, you have not been following the field very closely, see Richard Mallah's overview of the top AI achievements of 2015 [5].

The un-killer app …

Self-driving cars have been a boon for the AI community. Pun intended, this is really the killer (or rather un-killer) app and it keeps pushing the envelope for the entire field. First, driving is a great test case for AI challenges such as perception, learning and planning. Second, the auto industry produces cars en masse with a good track record of quality and reliability. Third, transportation is a very big industry with a clear impact on humans and their environment. When you combine these three elements, you get a great recipe for real innovation, i.e. the « successful creation and delivery of a new or improved product or service in the marketplace » as defined in [1].

With that in mind, the interesting question to ask yourself, if you are not doing AI, is:

« What’s the “self-driving car” for my field of research? »

AI virtuous cycles

Advances in hardware like GPUs have had a strong impact on AI algorithms, especially deep learning, making them orders of magnitude faster (60x according to NVIDIA). The R&D for these GPUs was mostly funded by the video game industry. It is ironic to see AI now being applied to learning how to play video games, as demonstrated by DeepMind for Atari games [2,3]. AI is also starting to disrupt other games like poker [4].

The DeepMind team motivated their research interest in games by the fact that this is a domain where you can easily get lots of data.
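For flavor, here is a minimal tabular Q-learning sketch of the core idea behind [2]. The real DQN replaces the table with a deep convolutional network over raw pixels (plus tricks like experience replay); the parameter values below are illustrative, not from the paper.

```python
# Tabular Q-learning sketch; illustrative only, not DeepMind's actual DQN.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate
Q = defaultdict(float)                   # Q[(state, action)] -> estimated return

def act(state, actions):
    """Epsilon-greedy: usually exploit the current Q estimates, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    """One Q-learning step toward the bootstrapped target r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
```

A training loop simply alternates act, a game step, and update; games make that loop cheap to run millions of times, which is exactly the data advantage the DeepMind team pointed to.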

Advances in neuroscience have also had a strong impact on AI algorithms. Neural networks take their inspiration from the way our brain works. An interesting application of AI mentioned during the conference was « AI-assisted science », where AI can be used to help and guide scientists in their work. Think medicine, physics, astronomy, etc.

So, it seems that we have two AI virtuous cycles:

AI virtuous cycle (1)
AI virtuous cycle (2)

Quis custodiet?

With AI playing an increasingly important role in our lives (driving our cars, scheduling our days, etc.), making it do the right thing becomes critical. Science fiction is a good source of inspiration for horror stories or predictions in this space, e.g. Skynet, Iron Man's Jarvis, Spike Jonze's Her, or Asimov's Three Laws of Robotics [6].

Eric Horvitz from Microsoft Research mentioned some ethical and safety-related requirements, like being able to

  • verify the correct behavior of an AI system, aka « provably safe AI »
  • understand when an AI system is getting confused
  • have the AI system ask for clarification or fall back to a human (see the sketch below)

Note that for self-driving cars, such fallback cases (aka disengagements) must be reported to the California Department of Motor Vehicles.
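As a thought experiment (mine, not Horvitz's), the last two requirements might be gated on the system's confidence in its own decision. Everything in this sketch, including the predict_with_confidence API and the threshold, is hypothetical:

```python
# Thought experiment: gate autonomous action on the model's confidence,
# and fall back to a human below a threshold. All names here are hypothetical.
CONFIDENCE_THRESHOLD = 0.9  # where to set this is itself a policy question

class ToyModel:
    """Stand-in for a real AI system that reports confidence with each decision."""
    def predict_with_confidence(self, observation):
        # Pretend we are confident only about observations we have seen before.
        known = {"clear road": ("keep driving", 0.99)}
        return known.get(observation, ("brake", 0.5))

def decide(model, observation):
    action, confidence = model.predict_with_confidence(observation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action  # the system acts on its own
    # The system "knows it is confused": escalate, and log it like a disengagement.
    print(f"needs human review: {observation!r} (suggested: {action!r})")
    return "hand control to human"

print(decide(ToyModel(), "clear road"))           # -> keep driving
print(decide(ToyModel(), "plastic bag? child?"))  # -> hand control to human
```

The hard part, of course, is producing confidence estimates that are themselves trustworthy, which loops back to the first requirement.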

Assuming the AI system faithfully does what it is supposed to do, it is not always clear what that “it” should be. My guess is that something like the trolley problem, in the context of self-driving cars, will keep AI experts, entrepreneurs, ethicists, lawyers and policy makers agitated for some time.

The trolley problem: replace the trolley and the man with a self-driving trolley.

Overall

Thank you to NYU for organizing this great event, with key players from the field, both from industry and academia.

I am not familiar enough with the field of AI to judge definitively, but the day felt to me more like “The Future of Deep Learning” than “The Future of AI”. Also, the choice of speakers and panelists did not scream “diversity” to me, with only one woman among them. I am sure we want the future of AI to be more diverse.

My three personal takeaways from the day are:

  1. What’s the “self-driving car” for my field of research?
  2. How do you build provably safe AI?
  3. AI-assisted science

To best summarize my current and personal point of view on the future of AI, I will borrow a quote from @KJ_Hammond at the Next:Economy conference:

« An AI system who can explain is a partner; an AI system who can’t explain is a bully. » Kristian Hammond, Nov 2015.

And nobody wants to be bullied by an AI.

Acknowledgements

Special thanks to Nikolai, Gideon, David and Hanna for comments on early versions of this post.

References

[1] Innovation: The Five Disciplines for Creating What Customers Want, Curtis R. Carlson and William W. Wilmot, 2006.

[2] Playing Atari with Deep Reinforcement Learning, Mnih et al., 2013.

[3] Google DeepMind’s Deep Q-learning playing Atari Breakout, 2015.

[4] Poker-CNN: A Pattern Learning Strategy for Making Draws and Bets in Poker Games, Yakovenko et al., 2015.

[5] The top A.I. breakthroughs of 2015, Richard Mallah, Dec 2015.

[6] Asimov’s Three Laws of Robotics, 1942.
