Retune 2016, Part 1: The Dawn of Deep Learning
This is an extract from my talk (i.e. sermon, it was in a church) at Retune 2016. I’ve split it into multiple posts, based on theme. Part 1 is a summary of ideas which I explore in more detail in my Resonate 2016 talk; the rest is material I’ve been thinking about for the past few years, but presenting for the first time.
- Part 1: The Dawn of Deep Learning
- Part 2: Algorithmic Decision Making, Machine Bias, Creativity and Diversity
- Part 3: Deep Learning as a journey through multiple dimensions and transformations in SPACE and TIME (coming soon)
- Part 4: AI, the artist (coming soon)
Artificial Intelligence (AI) is so hot right now. It’s what everyone is talking about. Cos it’s going to change the world. And this is what it looks like. It’s very blue, and shiny.
In fact here is a trend graph of the term ‘Artificial Intelligence’ in the news. It starts out relatively low, and spikes in 2012, which I think is related to a Google Brain research project which was big in mainstream news. The researchers ran an unsupervised learning algorithm on frames from 10 million YouTube videos, …
…and just by looking at those YouTube videos, the parameters in this randomly initialised neural network adjusted themselves to learn these representations of human and cat faces. True story.
But then it’s been pretty dormant until a big explosion in 2015. What happened?
Now most of the current AI explosion is related to a thing called ‘Deep Learning’ (DL), which is a form of Machine Learning (ML), which is a sub-field of AI. I’m going to talk about Machine Learning & Deep Learning in more detail later on, but for now I can say they’re basically algorithms that learn from data. As in, you train what’s known as a ‘model’, on some example ‘training data’, and then you feed the model new data, and it makes decisions, or predictions based on what it’s learnt from the training data.
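The train-then-predict workflow described above can be sketched in a few lines of code. This is a deliberately toy illustration (a 1-nearest-neighbour classifier, not deep learning, and the data points are made up), but the shape is the same: a model learns from labelled training examples, then makes predictions on data it has never seen.

```python
# Toy illustration of the train/predict pattern: a 1-nearest-neighbour
# "model". 'Training' here is trivial (the model just memorises the
# examples); a deep learning model would instead adjust millions of
# parameters. The workflow, however, is identical.

def train(examples):
    """Build a 'model' from (features, label) training pairs."""
    return list(examples)

def predict(model, point):
    """Predict a label for a new, unseen data point."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Find the training example closest to the new point...
    nearest_features, nearest_label = min(
        model, key=lambda ex: sq_distance(ex[0], point)
    )
    # ...and reuse its label as the prediction.
    return nearest_label

# Hypothetical training data: (features, label) pairs
training_data = [
    ((0.0, 0.0), "cat"),
    ((0.1, 0.2), "cat"),
    ((1.0, 1.0), "human"),
    ((0.9, 0.8), "human"),
]

model = train(training_data)
print(predict(model, (0.05, 0.1)))  # near the "cat" cluster -> cat
print(predict(model, (0.95, 0.9)))  # near the "human" cluster -> human
```

The point is the separation of phases: everything the model ‘knows’ comes from the training data, which is exactly why whoever holds the data holds the power, as we’ll see later.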
The AI history that’s currently being written is that these deep learning algorithms have been around for decades, since at least the 80s…
…even earlier in fact, some are keen to point out.
They’ll say that only recently, with the emergence of powerful parallelized hardware such as GPUs (Graphics Processing Units), have we been able to run these deep learning algorithms properly. And only with massive crowd-sourced datasets have we been able to put them to good use with practical applications solving real world problems. That’s why we’re having a massive AI revival, they’ll say.
This is a slide from a well known lecture by Yann LeCun called “The Unreasonable Effectiveness of Deep Learning”, saying exactly that. He’s one of the godfathers of Deep Learning and he knows what he’s talking about. And it is true.
But there’s another angle to this.
This is the trend graph for the term ‘Big Data’. Absolutely nothing until about 2011, and then slowly it starts rising. Now it should come as no surprise that, after a steady period of ‘Big Data’, we have an explosion of ‘Artificial Intelligence’.
(btw, yes I’m going to use tweets as anchors throughout my talk).
First, as a slight provocation, I like the metaphor that likens the development of artificial intelligence as a means of coping with big data to the Darwinian evolution of ‘intelligent’ complex organisms, and eventually consciousness.

As simple organisms evolved, some started developing more complex sensorimotor systems. They started acquiring more complex senses and related behaviours, to react more optimally to their environment, evade predators, and find food or mates. They perhaps started needing a higher level of ‘intelligence’ to manage the higher-dimensional data streaming in, so they could make better use of the limited bandwidth in their neural pathways, make more optimal decisions to feed to the relevant parts of their body, and take optimal actions. This involves many things, including being very efficient about which sensory input data to process and elevate to higher levels of cognition, and which sensory input data to suppress and effectively ignore.

In higher organisms still, this may even include learning to model the environment, so as to make more accurate predictions, and thus be more efficient in processing sensory input data. Going even further, to be able to form any kind of social interaction, they may need to learn to model each other. So I can attempt to model and interact with any of you, not as billions of atoms vibrating in a quantum field, not as a huge lump of organic cells moving through space; but as a thinking, feeling individual, as an abstracted high-level entity with goals and desires that I can empathise with.
So intelligence and big data go hand in hand. I don’t have more time to go into this today, but I wrote a long post about it if you’re interested. You can find it if you search for this title — though I’ll probably write an update for it soon.
But there’s another, more tangible reason why AI is exploding now, after years of big data.
One of the reasons is …
and data is the new currency. But actually it’s not the data itself which is where the true value lies. It’s what the data says that’s valuable. For many companies like Google, Facebook, Twitter and now countless start-ups, their business models depend on making sense of big data.
Because they’re collecting more data than they know what to do with.
Likewise with the NSA, GCHQ, the Five Eyes. They’re building such a monumental archive of human communications, and they don’t have a frigging clue what to do with it. They’re all drowning in data.
They need machines, to crunch through the data, find regularities, learn from it, find meaning in it, and understand it.
They need machine servants that will produce executive summaries of the data, dumbed down for puny human minds, and will then take the desired actions.
So billions and billions are being invested in solving this problem. You won’t see billions invested in AI research to end world poverty, or culturally enrich our lives. Interestingly, most of the current breakthroughs are in natural language understanding (so they can understand your emails and documents), image recognition (so they can understand your photos and videos), speech recognition (so they can understand your voice and voice calls) etc.
And even if this research is performed openly — which it mostly is, with algorithms and research outcomes shared publicly — it’s not a lot of use without data to train or predict on. Whoever has the data, is in control.
So first and foremost we needn’t be concerned about terminator-style machine overlords enslaving us. We should be concerned about corporate or state overlords — backed by machine intelligence — tightening their grip on us, and widening the economic gap, while we succumb to a culture of compliance as these practices become increasingly normalised. That’s the first thing we should be concerned about.
But I don’t want to be an unfair harbinger of doom, and only focus on negatives. I think plenty of good will hopefully also come out of this AI research. I’m especially hopeful for revolutions in healthcare, and cures for terrible diseases like leukaemia or dementia — which is actually also an active area of ML research right now — though I don’t know if it’s getting as much investment as the surveillance related fields.
But I think it’s important to be realistic, and understand why we’re having the AI explosion that we’re having right now. It’s naive to think that it’s a coincidence, that super-powerful GPUs happened to be lying around (when in fact NVidia spent $2 billion in R&D just for its latest chip targeted at deep learning), or that loads of ready made, clean, labelled datasets happened to appear online ready to plug into these ‘algorithms of the 80s’ (when in fact funding for AI research has gone through the roof both in academia with PhD and post-doc programs, and also in the commercial sector with tech titans like Google, Facebook etc. expanding their AI research teams, as well as an explosion in Venture Capital funding for AI startups).
Make no mistake, the reason we’re having an AI explosion is because billions are being invested in the field, and that funding has a motivation. I think it’s fair to say…
If World War I gave us — at least accelerated the development and widespread use of — mechanical computers,
WWII gave us digital computers,
And the Cold War gave us the Internet,
Then the mass surveillance related to the War on Terror and Internet business models are giving us Artificial Intelligence and Deep Learning.