Deep Learning … and beyond — RRRs from the Deep Learning Summit 2017 in London
I have been working at Nitro for a year now and have learned a lot about Machine Learning (ML) and Natural Language Processing (NLP), but as always: the more you learn, the more you understand that you know nothing.
Triggered by a couple of events, like DeepMind's AlphaGo winning against Lee Se-dol and Elon Musk talking about the risks of Artificial Intelligence (AI), there has been a lot of interest in where we are with AI and ML and where we are going.
Without a doubt we are heading towards a crossroads. What is going to happen in the next ~10 years will most likely change the world. Forever!
And there are risks, but also plenty of opportunities. In some cases the risks create fear (e.g. the machines are going to take over the world). In other cases we think and talk about AI and ML as a panacea to ensure the survival of our species. I think the truth is somewhere in the middle. If you want to get your head around it, I recommend reading the article series on Machine Learning and Artificial Intelligence (AI) on Wait-But-Why and especially the classic piece from Vernor Vinge on the Singularity. One remarkable quote/prediction in this article is: "The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of _true_ technological unemployment finally come true." I think it is especially useful to understand the difference between Machine Learning, Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI).
For me the main conclusion/takeaway from the conference is that the first wave of ML/DL-based intelligence will hit us "soon". It is not a question of if, but when (maybe it has already arrived :)). With enough training data, enough time, enough CPU/GPU power, and a narrow enough domain, ML/DL models will be able to do "everything" better than a human (play games, drive cars, diagnose illnesses, run the suicide hotline, …). With that we can declare the problem of ANI to be solved.
Now we can start to look ahead and think about what comes next. Building models with a lot of (labeled) training data is a solved problem. For me the next, bigger challenge is to make models learn fast(er) with no data, or with very little, incomplete, bad data. How is it that a 3-year-old child can learn to play 2–3 games on an iPad, while comparable efforts to train ML/DL models (for a day) give us mediocre results? This means for me that the question is going to move from "can we build a model that has better judgement than a human being" (and more and more the answer to that question is going to be yes) to "how can we make the models learn faster than a human being" and "how can we make sure that we end up with models that can play not just one game, but all games" (solve the problem of transfer learning).
I think there is a lot of interesting work happening in the space of Generative Adversarial Networks (GANs). For fun, take a look at this …
With GANs you can start to get insight into the structure of your data without labeling it, and you can then use that insight to configure your DL models without learning from labeled data.
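To make the GAN idea a bit more concrete, here is a minimal toy sketch (my own illustration, not from any of the talks): a generator and a discriminator, each just a handful of scalar parameters, playing the adversarial game on unlabeled 1-D data drawn from a Gaussian. No labels are used anywhere; the generator still learns to match the structure (here, the mean) of the data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # Unlabeled "real" data: samples from N(4, 1)
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = w*z + b, with noise z ~ N(0, 1)
w, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(a*x + c), probability that x is real
a, c = 0.1, 0.0

lr, n = 0.03, 64
for step in range(4000):
    # Discriminator step: ascend  mean log d(x) + mean log(1 - d(g(z)))
    x = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    fake = w * z + b
    dx, df = sigmoid(a * x + c), sigmoid(a * fake + c)
    a += lr * (np.mean((1 - dx) * x) - np.mean(df * fake))
    c += lr * (np.mean(1 - dx) - np.mean(df))

    # Generator step: ascend  mean log d(g(z))  (non-saturating loss)
    z = rng.normal(0.0, 1.0, n)
    fake = w * z + b
    grad_fake = (1 - sigmoid(a * fake + c)) * a  # d/dfake of log d(fake)
    w += lr * np.mean(grad_fake * z)
    b += lr * np.mean(grad_fake)

samples = w * rng.normal(0.0, 1.0, 10000) + b
print(f"generated mean ≈ {samples.mean():.2f} (real data mean is 4.0)")
```

The generator starts out producing samples around 0 and is pushed towards the real distribution purely by the discriminator's feedback; real GANs do the same thing with deep networks instead of two linear maps.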
Last but not least, one of the presentations was using Generative Models to generate source code, and I know that this is a long shot, but what if … this is the beginning of evolution, where we have models creating new models. Most of them will not be "better" than the model that created them, but occasionally a "better" model will "evolve". Interesting!
The presentations will hopefully get uploaded in the next couple of weeks and I recommend taking a look at them.
In any case I think/feel that we have barely scratched the surface. What is going to happen in the next 10 years is going to be truly remarkable, and I choose to believe that with the right kind of reflection and awareness we can use it to build a good future for ourselves and the planet.
Roland's Random Ramblings :)
Originally published at www.tritsch.org on September 23, 2016.