My Top 5 Trends To Watch In AI

I researched the machine learning space for 3 months at OpenOcean, a VC focused on European software start-ups looking to raise their Series A.

Alex Obadia
12 min read · Jan 27, 2018


Artificial Intelligence (AI) is considered by many to be the next industrial revolution and has naturally become a trendy buzzword, from Justin Timberlake’s music video set at a fictional 2028 Pan-Asian Deep Learning conference to SingularityNET’s $36M ICO raised in 60 seconds, promising to create a decentralized AI marketplace using blockchain technology.

Skip this if you’re already familiar with the reasons behind the current interest in AI and broadly know what is happening in the space. You can go straight to “I have listed …”

This is nothing new though. Artificial intelligence has been around since the 1800s with pioneers like Ada Lovelace. Some pinpoint its ‘real’ starting point to the summer of 1956 at Dartmouth College, during the Summer Research Project on Artificial Intelligence. It has had many booms and busts since, the busts called AI winters, usually because compelling demos would attract investment but wouldn’t live up to expectations in real life.

Photo from the 1956 Dartmouth Conference with computer science superstars including Marvin Minsky, Ray Solomonoff, Claude Shannon, John McCarthy, Trenchard More, Oliver Selfridge and Nathaniel Rochester

The current renewed interest in artificial intelligence comes from research progress made in machine learning, and more specifically deep learning, diverging from previous efforts in logic-based and knowledge-based AI. This is thanks to newly discovered algorithms, broad investment from governments, big companies and universities, exponential growth in computing performance and in the amount of labeled data, and the open-sourcing of research (e.g. arXiv, open-source datasets).

For those wondering what machine learning is: put simply, it is the science of learning. “A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.” — Tom Mitchell, 1997
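Mitchell’s definition can be made concrete with a toy sketch (mine, not Mitchell’s): a perceptron whose task T is classifying points, whose experience E is repeated passes over labeled examples, and whose performance P, accuracy, improves with experience.

```python
# Task T: classify points; Experience E: passes over labeled examples;
# Performance P: accuracy on the task.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

def predict(w, b, x):
    return 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0

def accuracy(w, b):
    return sum(predict(w, b, x) == y for x, y in data) / len(data)

w, b = [0.0, 0.0], 0.0
history = [accuracy(w, b)]          # P before any experience
for _ in range(10):                 # each pass over the data adds experience E
    for x, y in data:
        err = y - predict(w, b, x)  # perceptron update rule
        w[0] += err * x[0]
        w[1] += err * x[1]
        b += err
    history.append(accuracy(w, b))

print(history[0], history[-1])      # performance improves: 0.75 -> 1.0
```

Performance on T, as measured by P, goes from 0.75 to 1.0 as experience E accumulates: that is Mitchell’s definition in ten lines.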

Deep learning is a subset of machine learning in which the algorithm that allows the machine to get better at some task is a neural network with many hidden layers (hence ‘deep’). You can have deep supervised learning, deep unsupervised learning, deep reinforcement learning and hybrids of the three. Further below I explain the basic distinction between these subsets of machine learning.
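To make “many hidden layers” concrete, here is a minimal forward-pass sketch (the architecture and weights are arbitrary choices of mine, purely for illustration): each layer mixes all its inputs, and everything between input and output counts as a hidden layer.

```python
def relu(v):
    """The standard non-linearity applied between layers."""
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    """One dense layer: each output is a weighted sum of all inputs plus a bias."""
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Stack layers; all but the last get the non-linearity."""
    for weights, biases in layers[:-1]:
        x = relu(layer(x, weights, biases))
    weights, biases = layers[-1]
    return layer(x, weights, biases)

# A toy network with two hidden layers (weights chosen arbitrarily).
net = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # hidden layer 1: 2 -> 2
    ([[1.0, 1.0], [-1.0, 2.0]], [0.1, 0.0]),   # hidden layer 2: 2 -> 2
    ([[1.0, -1.0]],             [0.0]),        # output layer:   2 -> 1
]
print(forward([3.0, 1.0], net))   # one scalar output, ~2.1
```

Real deep networks differ only in scale (millions of weights, learned rather than hand-set) and in the training procedure that adjusts those weights.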

China, for instance, has committed $150 billion USD to AI in its five-year plan. As a comparison, the US spent $1.2 billion USD on unclassified AI programs in 2016. This comparison should be taken with a pinch of salt, given that much of the AI funding in the US comes from private companies and goes to classified programs.

On the other side of the Pacific, Canada has become an internationally renowned research hub for AI: it has a national AI strategy, leading universities, researchers and research labs. For example, last March the Vector Institute was founded, and received $150 million from the government and Canadian businesses. Its mission is to work with academic institutions, industry, start-ups, incubators and accelerators to advance AI research and drive the application, adoption and commercialization of AI technologies across Canada.

On the corporate front, titans like Google, Baidu, Facebook, Amazon, General Motors and many more are putting a big emphasis on AI in their strategies. They’ve gone as far as declaring it their top priority and actively training individuals to short-circuit the industry’s talent shortage by founding their own research labs, launching higher-education programs like the Google Brain Residency and partnering with online course platforms like Udacity.

I have listed below 5 trends in AI that I believe are worth looking into, some coming sooner than others. I have kept it short and am trying to pique your curiosity with quotes and facts.

Consider each point as a conversation starter 😁

1. Unsupervised Learning

You can classify machine learning systems into 4 broad categories based on whether or not they were trained with human supervision: supervised, unsupervised, semi-supervised and reinforcement learning. Most of the machine learning techniques used today are based on supervised learning and reinforcement learning, yet researchers seem to agree that unsupervised learning is the holy grail of AI.
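The contrast is easiest to see in code. Here is a minimal unsupervised sketch: a 1-D k-means with k=2 that discovers two groups in unlabeled data (toy data and initialisation of my own choosing). A supervised learner would be handed the group labels; here the algorithm must find the structure in the input itself.

```python
# Unsupervised learning in miniature: no one tells the algorithm the
# "right answers"; it discovers the two clusters on its own.
points = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]

centers = [points[0], points[3]]              # crude initialisation
for _ in range(10):
    groups = [[], []]
    for p in points:                          # assign each point to its
        i = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        groups[i].append(p)                   # nearest center
    centers = [sum(g) / len(g) for g in groups]   # recompute the centers

result = sorted(round(c, 1) for c in centers)
print(result)                                 # the two discovered clusters
```

The supervised version of this problem would be trivial (the labels are the answer); the unsupervised version has to invent its own notion of “group”, which is exactly why the general case remains so hard.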

Slide from Matt Johnson’s (QCWare) talk on Quantum Machine Learning, AI, and Autonomy, available here

One of the strongest advocates of this view is Yann LeCun, member of the modern AI Holy Trinity and current Chief AI Scientist at Facebook. LeCun uses this slide to explain the importance of unsupervised learning:

Yann LeCun’s cake slide (Keynote @ NIPS 2016 — full deck available here)

To help you grasp how crucial unsupervised learning is, here is a quote from Geoffrey Hinton, second member of the modern AI Holy Trinity, now a researcher at Google Brain in Toronto. He uses an intuitive analogy to compare supervised and unsupervised learning.

  • “When we’re learning to see, nobody’s telling us what the right answers are — we just look. Every so often, your mother says “that’s a dog”, but that’s very little information. You’d be lucky if you got a few bits of information — even one bit per second — that way. The brain’s visual system has 10¹⁴ neural connections. And you only live for 10⁹ seconds. So it’s no use learning one bit per second. You need more like 10⁵ bits per second. And there’s only one place you can get that much information: from the input itself.” — Geoffrey Hinton, 1996
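Hinton’s arithmetic checks out on the back of an envelope:

```python
# Back-of-the-envelope version of Hinton's numbers above.
connections = 1e14          # neural connections in the brain's visual system
lifetime_s = 1e9            # roughly a human lifetime, in seconds
bits_needed_per_s = connections / lifetime_s
print(bits_needed_per_s)    # 100000.0 -> you need ~10^5 bits per second

supervised_rate = 1.0                     # ~1 bit/s from occasional labels
total_supervised = supervised_rate * lifetime_s
print(connections / total_supervised)     # supervision falls short ~100,000x
```

Label-style supervision supplies five orders of magnitude too little information to set all those connections; the rest, Hinton argues, has to come from the input itself, which is the unsupervised learning problem.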

Here is what two other experts, Yoshua Bengio and Andrew Ng, have to say about it. Bengio is the third member of the modern AI Holy Trinity and co-founder of ElementAI, Ng is the ex-Chief Scientist at Baidu and co-founder of Coursera, now leading his own fund investing in AI:

  • “Many of the senior people in Deep Learning, including myself, remain very excited about it (unsupervised learning) but none of us have any idea how to make it work yet.” — Andrew Ng in Heroes of Deep Learning, 2017
  • “We don’t even have a good definition of what’s the right objective function to measure that a system is doing a good job in unsupervised learning. It leaves open such a wide range of research possibilities! What’s exciting is the direction of research where we’re not trying to build something that’s useful, we’re just going back to basic principles about how can a computer observe the world, interact with the world, and discover how that world works. That’s cool because I don’t have to compete with Baidu, Google or Facebook. This is the kind of research that can be done in your garage.”— Yoshua Bengio in Heroes of Deep Learning, 2017

Bengio’s quote reminds us there is still fundamental progress to be made in AI on fronts that aren’t exclusively reserved for large corporations. This is an opportunity for bold researchers and investors to turn the industry on its head!

2. The Next Step: After Deep Learning

Although deep learning has recently been on everyone’s lips and has many successful use-cases, from the software behind self-driving cars to Netflix’s recommendation algorithm, some are now arguing that it might be a dead end on the road to Artificial General Intelligence (AGI). By AGI we mean here a machine that has reached or surpassed human intelligence.

“‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.” — Geoff. Hinton, September 15, 2017

Here are some current problems with deep learning put forward by Gary Marcus, ex-director of Uber’s AI lab, in his Case Against Deep Learning Hype:

  • Deep learning is data hungry
  • Deep learning thus far is shallow and has limited capacity for transfer learning (applying something learned to a slightly different situation)
  • Deep learning thus far cannot inherently distinguish causation from correlation
  • Deep learning thus far has struggled with open-ended inference

“For most problems where deep learning has enabled transformationally better solutions (vision, speech), we’ve entered diminishing returns territory in 2016–2017.” — François Chollet, author of Keras (open source neural network library), December 2017

Although most researchers agree deep learning won’t be enough in itself to get to AGI, many believe it isn’t going to be abandoned but instead built upon. Examples of projects building upon deep learning include Geoffrey Hinton’s exciting capsule networks.

Like unsupervised learning above, this is a very interesting area without any particular direction that makes sense at the moment. It is yet again an opportunity for bold researchers and investors to come in.

3. AI’s impact on society

Last September, Evercore raised NVIDIA’s share price target by 40% and pointed out that investors are “severely” undervaluing the potential market for artificial intelligence. NVIDIA is a chip company that designs graphics processing units (GPUs) for the gaming, cryptocurrency, and professional markets, as well as system on a chip units (SoCs) for the mobile computing and automotive market.

This fact is just one of many examples of the asymmetry between society’s perception of AI’s disruptive power and the reality of it. I believe this asymmetry is yet another opportunity for entrepreneurs and investors, and that this trend is composed of the sub-trends outlined below:

3.1 Job-displacement: AI is coming to disrupt a lot of industries, and many individuals will lose their jobs. I believe it is worth seriously looking into job-displacement start-ups that facilitate the transition of displaced individuals to new jobs. For investors, you’d be aligning financial and social incentives by profiting from the asymmetry mentioned above while helping society cope with this disruption, a double win!

An example of a company focusing on this area is the non-profit 80,000 Hours, supported by Y Combinator and Sam Altman amongst other donors. It’s worth noting that, as in every industrial revolution, some jobs disappear while others are created. What worries analysts is that the rate at which jobs are destroyed may be higher than the rate at which new jobs are created.

3.2 Explainability: Progress in explainability would soften the shock of this disruption and accelerate AI adoption. Let me explain: deep learning as it is today is a black box, meaning it is nearly impossible to explain how an algorithm processes an input to produce the correct output. For people to trust AI, and to foster its adoption, we will have to develop technologies that can explain the steps the machine took to reach its answer. One industry where this is particularly important is healthcare: would you trust a machine’s prescription without understanding why it’s giving it to you?

Read more about the blackbox characteristic of deep learning here (MIT Review) and how one Israeli researcher cracked it wide open here.

3.3 Infrastructural changes: AI will not only cause job disruption but also infrastructural disruption. One example of this has to do with self-driving cars and the urban landscape of the future, another with the structure of our universities.

Here are 2 quotes from previously mentioned AI pioneer Geoffrey Hinton, and Sebastian Thrun, co-founder of Udacity and ex-leader of Google’s self-driving car project and now leading an exciting automated flying car project:

  • “Our relation to computers has changed, instead of programming them we now show them. Computer Science departments are built around the idea of programming computers, they don’t understand that this ‘showing computers’ is going to be as big as ‘programming computers’ and that half of the department should be people working on getting computers to do things by showing them, not only a few professors.” — Geoffrey Hinton in Heroes of Deep Learning, 2017
  • “It’s not only the taxi, bus, and truck drivers that will be disrupted from the self-driving car revolution. If transportation as a service becomes a reality you only need a quarter of cars left today on the street. It’d be a bad day for many manufacturers and car-insurance companies. Going even further, the urban landscape itself will change with no more parking spots around the city needed and city streets freed of parked cars.” — Sebastian Thrun in Udacity talks, 2016

4. Solving the brain with AI

We still barely understand how our brain works. Using algorithms discovered in machine learning, we were able to find analogous “algorithms” in the brain.

Geoffrey Hinton and a collaborator discovered an algorithm in 1988 that works analogously to a mechanism in our brain discovered only 5 years later in 1993 by Henry Markram. The Recirculation algorithm and Spike-Timing-Dependent Plasticity seem to operate analogously, using presynaptic activity.

Many scientists, part of the connectionist school of thought, think progress in machine learning brings us closer to understanding the brain. Here is Geoffrey Hinton again, who by the way got into machine learning because of his interest in the brain:

  • “If it turns out that back prop is a good algorithm for doing learning then surely evolution could’ve figured out a way to implement it in the brain. Presumably there’s this huge selective pressure for it and the brain could have something quite close to back prop!” — Geoffrey Hinton in Heroes of Deep Learning
  • “In the early days, back in the 50s, leading figures like Von Neumann and Turing didn’t believe in Symbolic AI. They were far more inspired by the brain. Unfortunately, they both died very young and their voice wasn’t heard.” — Geoffrey Hinton in Heroes of Deep Learning

Researchers against the connectionist view give the example of how we were inspired by birds when designing planes but ultimately moved away from bird anatomy to design better ones.

Before working on the airplane, the Wright brothers studied how birds fly. The shape and size (and the aspect ratio) of the wings that they used for this flying prototype are similar to those of common birds.

Ultimately though, a greater insight into how our brains work could lead to a future depicted in Wait But Why’s must-read Neuralink article. Isn’t it weird that Elon Musk, known for anticipating and accelerating technological trends, co-founded OpenAI, a not-for-profit AI research company, and Neuralink, a neurotechnology company focused on enhancing the brain, within seven months of each other?

Food for thought!

5. Human-machine synergy and intelligence augmentation

We seem to hear a lot about AI replacing humans but what if its best purpose is to enhance us?

Many AI start-ups are actually branding their product as intelligence augmentation, especially in healthcare. Yet I feel there isn’t enough emphasis on it in the public media and not enough importance given to it on the investor side, so I’m adding it as the fifth of my top 5 AI trends. Here’s a famous example of human-machine synergy:

‘In 1997 Garry Kasparov lost a chess match to Deep Blue, a machine. It was an event remembered by many as a pivotal moment in the relationship between humans and computers. “To many, this was the dawn of a new era, where man would be dominated by machines.” But, argues Sankar, that’s not what happened, “Twenty years later, the greatest change to how we relate to computers was the iPad, not HAL.”

The second chess match was in 2005. In this match, humans and computers could collaborate if they liked. Who won? It wasn’t a grandmaster with a supercomputer, but a couple of amateurs with a few laptops. They were able to counter both the skill of grandmasters and the power of supercomputers by finding the right way to cooperate with the machines — to guide the computers to the right answer.’ — Shyam Sankar

Kasparov vs Deep Blue

Watch the full TED Talk and check out the movie about the match! If you find it a bit outdated, I highly recommend the AlphaGo documentary released in 2017, documenting the journey of the AlphaGo team and the 5 matches against Go world champion Lee Sedol. There again you can see how Lee Sedol gets better by playing against AlphaGo.

BONUS

  • Cybersecurity: Nathan Benaich predicted a rise in AI cybersecurity threats in 2018 that will launch a new wave of cybersecurity start-ups. AI-based attacks will have unprecedented power to adapt to their targets, using techniques from supervised and reinforcement learning to become much harder to stop. Such attacks could directly target our infrastructure, like power grids. How will we defend ourselves?
  • Decentralised AI: After going to London.AI #11, I believe I should mention Decentralised AI. With the rise of concerns over data ownership, and the data-hungry nature of machine learning, there is a new tension between developers and consumers: developers want the ability to create innovative products and services, while consumers want to avoid sending developers a copy of their data. Decentralized AI is a solution to this problem: a model can now be trained on data that it never has access to, thanks to technologies like federated machine learning, blockchain, multi-party computation and homomorphic encryption. Parallel to this trend are blockchain projects building decentralized data marketplaces like Streamr, giving data control back to the individual. Curious to see what comes next 👀
  • AI Research: Graphcore, a leading ML hardware start-up, just posted ‘Directions of AI Research in 2018’, definitely check it out for expert thoughts on where AI is going.
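The federated learning idea mentioned in the Decentralised AI point can be sketched in a few lines: each client trains a shared model on data that never leaves its machine, and the server only averages the returned parameters (the toy scalar model and datasets below are my own invention, purely for illustration).

```python
# Federated averaging in miniature: clients send back updated parameters,
# never their raw data. Toy task: fit a single scalar w by gradient
# descent on each client's local squared-error loss.
clients = [                      # private datasets that never leave the client
    [2.0, 2.2, 1.8],
    [4.0, 3.9, 4.1],
    [3.0, 3.1, 2.9],
]

def local_update(w, data, lr=0.1, steps=20):
    """Plain gradient descent on the client's local loss."""
    for _ in range(steps):
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

w = 0.0                          # shared global model
for _ in range(5):               # each round: broadcast, train locally, average
    updates = [local_update(w, data) for data in clients]
    w = sum(updates) / len(updates)

print(round(w, 2))               # converges to ~3.0, the mean over all data
```

The server ends up with a model fitted to everyone’s data while never seeing a single data point; production systems layer secure aggregation and encryption on top of this same loop.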

Thanks to Thibault Févry, George Wynne and Max Mersch for reading drafts of this article and helping me get it into good shape!

AI newsletters: Keep up with AI news by subscribing to Nathan Benaich’s Nathan.AI, Denny Britz’s Wild Week in AI and Jack Clark’s Import AI!
