Since Andrew Ng’s letter to me was typed and not handwritten, I’ve decided to publish my thoughts on the AI Index here instead of having them included in the Expert Forum.
For my summary of key/favorite points see here, and to view the report itself go here. I’ll reiterate that it only takes 15–30 minutes to skim through the bulk of the report, and I’d highly encourage you to do so!
In April 2017, a small group of software engineers who met taking computer science classes together decided to take the plunge into deep learning. That group eventually became Paper Club, and since then we’ve gone through parts 1 and 2 of Fast AI, entered various Kaggle competitions, and tried to read one AI-related paper a week.
But seven months is barely enough to scratch the surface! I’m still quite fresh in this domain. More than anything, I’m excited — to continue my own path through this material, and to witness how AI will change the world over the coming years. The content of the AI Index reinforced this feeling quite a bit.
With that out of the way, here’s a grab bag of my thoughts on AI Index 2017.
This piece of the report echoes what has been a recurring theme in Paper Club: theory vs. practice. How much time should we dedicate to reading papers, vs. entering Kaggle competitions? Doing linear algebra exercises vs. training models for a side project? Should we read old fundamental papers or new cutting-edge papers?
Since most of our members’ programming backgrounds come from bootcamps and/or self-teaching, we have experience with the bad habits that can form when fumbling with high-level libraries without a clue what’s going on under the hood. However, we are also spending our own free time outside of work to learn this stuff, and would quickly lose interest if we were just grinding through worksheets of math problems.
This graph demonstrates a natural progression of ideas and interest in AI this century; starting with researchers, trickling down to students at universities, and now making its way into startups and industry.
My interpretation of this data is that now is a good time to lean towards practical application of the hard-won ideas produced by universities and big companies over the past few years. The growth trajectory for business interest has caught up with (and may soon surpass) the interest in research/education as these powerful concepts are aimed at solving real-world problems.
Encouraging Trends in Technical Performance
Although humans are still better at more “difficult” tasks like document/visual question answering, they are not improving at nearly the rate of neural nets.
For example, in the graph above plotting document question answering accuracy, we can see that in just the last two years the state-of-the-art AI has improved its accuracy by almost a third, going from 60% to almost 80%. It’s not hard to imagine these sections of the report singing a different tune in the next iteration or two of the AI Index.
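As a quick sanity check of that “almost a third” claim (the 60% and 80% figures here are my approximate readings off the chart, not exact numbers from the report):

```python
# Approximate state-of-the-art document QA accuracy, read off the chart
old_acc = 0.60  # roughly two years ago
new_acc = 0.80  # roughly now

# Relative improvement: gain measured as a fraction of the old score
relative_gain = (new_acc - old_acc) / old_acc
print(f"{relative_gain:.0%}")  # prints 33%
```

A 20-percentage-point jump on a 60% baseline is a one-third relative improvement, which is where the “almost a third” comes from.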
Oh, the Irony
I find it somewhat ironic that a big recurring theme is difficulty finding relevant, trustworthy data. That just so happens to be the biggest problem facing the domain that the report is reporting on!
With a bit of extrapolation, the AI Index was able to produce some trends around university enrollment in AI-related classes (pictured above). While this is a good start, it represents a very narrow demographic of potential deep learning lifers. I would love to see more thoughts and data around the accessibility of the industry to newcomers.
This is especially notable to Paper Club. When we first surveyed the landscape, all we saw was a ton of math and required computing power. This was quite intimidating (even to the guy with a math major!). One of the reasons we started with Jeremy Howard’s fast.ai course (which is excellent, btw) is because of its promise to demystify deep learning. We couldn’t be happier with that choice, as it gave us the confidence and tools to keep going and get into more complex material.
Improving on and developing similar MOOCs or readily available books would widen the reachable audience by a ton. Continued focus on the quality of open-source tools will prop up the feeling that my single GPU and I can take on the world.
Mainstream Media Coverage
Current media coverage of AI seems to be quite positive based on the data presented in the report. Given the built-in doomsday scenario with AI, how long will this last before naysayers get their chance? My experience has been that the media bounces back like a rubber band when its sentiment on a topic gets stretched too far in one direction. The general public can be wary of new technological developments, especially ones without clear explanations (which I would say currently describes neural networks). I hope new articles continue to trend positive, but would not be surprised if things took a turn for the worse.
Diversity and Inclusion
Diversity and inclusion in AI has a much greater impact than in almost any other industry because of the potentially catastrophic consequences of machine bias in algorithms developed in a vacuum by a hivemind. This makes it even more worrying that we’re unable to surface reliable diversity and inclusion data in the industry — I can’t offer much right now, but I think it should be a priority, and I sure hope this changes in the near future!
The Milestones timeline was a great visualization of the progress in AI, as AI has been able to beat humans at exponentially harder tasks in sub-linear time over the past few years. I would also like some kind of insight into what the next barriers to fall will be, and how close we are to breaking them.
One area of interest that I would like to see included in future reports is progress towards general AI (i.e. an AI that can “think”, reason, and approach new tasks and problems like a human does).
Like it or not, achieving general AI is a breakthrough that would instantly change the world. Expert opinions differ vastly on the timeline and viability of general AI, and that’s okay — I want to read about all sides of this discussion! If there’s no substantive progress towards general AI, I want to hear about that as well.
The AI Index is part of the One Hundred Year Study on AI at Stanford, and as time goes on I think they’ll have to turn their ears more and more towards general AI.
I’m very happy that the AI Index is a thing. My conclusion here is the same as it was for my summary: this is an exciting beginning to a fantastic project. I can’t wait for AI Index 2018, and even more so AI Index 2038.
P.S. There’s a weird downwards dip in the graph of News Translation BLEU scores over time on page 29. If anyone knows what’s up with that, I’m curious.