Happy AI New Year! Global Researchers Reflect on 2019, Talk Trends for 2020

Synced · Published in SyncedReview · Jan 2, 2020 · 8 min read

The year 2019 saw unprecedented growth in AI research, development and deployment. Great technical progress was achieved in image recognition, image generation, natural language understanding and other fields, while challenges remain in data management, efficiency measurement, computational capacity and other areas.

To welcome 2020 with some fresh AI perspectives, Synced spoke with global researchers from Google Brain, Sony AI, Alibaba affiliate Ant Financial (formerly known as Alipay), Israel-based AI processor company Habana (recently acquired by Intel), Russian tech giant Yandex, Vietnam’s newly established research lab VinAI Research, French deep learning inference acceleration startup Mipsology, and China-based remote sensing data platform TerraQuanta.

From where you stand, what are some promising/trending AI technologies of 2019?

Colin Raffel, Senior Research Scientist, Google Brain
In 2019 the community made huge progress on learning from limited labels. MixMatch, UDA, S4L, and ReMixMatch produced huge gains on standard semi-supervised learning benchmarks. BERT is an obvious success story in transfer learning for NLP. A year or two ago the main criticism of big neural nets was that they required a huge labeled dataset to train. This is increasingly not a valid concern, and I think that’s a huge deal!

Hiroaki Kitano, Sony AI president and CEO & Michael Spranger, Sony AI Deputy General Manager
Deep reinforcement learning is obviously one case, and technologies like GANs. And generative models in general — things that can generate new artifacts that are very interesting — so that’s one area we are very interested in. We also do a lot of work on music, for instance generation of music.

Jun Zhou, Senior Staff Engineer, Ant Financial
To me, the most promising machine learning technology of 2019 is definitely secure and privacy preserving machine learning. There are many reasons behind this. People are becoming more aware of privacy due to the several data leakage incidents that happened in 2019. Also, more and more privacy protection related laws and regulations have come into force recently. However, data are isolated in nature, so the traditional centralized machine learning paradigm on plaintext data does not work anymore. Secure machine learning has drawn lots of attention in both academia and industry as a sharp tool to solve this problem.

Anna Veronika Dorogush, Head of ML systems, Yandex
Since my focus is gradient boosting, there is a lot of work in gradient boosting this year that I really like. We’ve published papers on, for example, improving the quality of gradient boosting using sampling, and on presenting decision trees in a different, more interpretable way. But there’s also a lot of research and some really nice papers in other areas. Recently, the most important papers are probably connected to natural language processing — the transformers have really improved, and we are using them heavily.
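To make the gradient boosting paradigm concrete for readers new to it, here is a minimal sketch using scikit-learn, chosen purely as an illustrative stand-in and not the Yandex implementation discussed above; the subsample parameter shows the simplest form of the row-sampling idea.

# Minimal gradient boosting sketch. scikit-learn is used purely as an
# illustrative stand-in; it is not the implementation discussed above.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# subsample < 1.0 trains each tree on a random fraction of the rows,
# one simple form of the sampling idea mentioned in the interview.
model = GradientBoostingClassifier(n_estimators=200, subsample=0.8,
                                   learning_rate=0.1, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))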

Ludovic Larzul, CEO of Mipsology
I’d say accelerators for neural networks. Talking about my field, accelerating computation, I’ve been pretty surprised by the results from Intel and Graphcore. And BERT, we’re not focusing on that, but BERT was extremely great.

Hung H. Bui, Director of VinAI Research
Perhaps unsupervised learning; and for NLP, the progress is that the latest language models can help improve performance on many existing tasks, through contextual embeddings for example. So there are a lot of exciting new developments. My personal interest is in how to learn representations for decision making, for example how do you learn representations that can later be used for control tasks? I think that’s a very interesting problem.

Eitan Medina, CBO of Habana
Our focus is enabling the researchers and the companies that are actually utilizing compute resources with new hardware that’s more efficient. So I think there’s a huge demand for better hardware. And what I’m really happy to see is how fast the research is moving. For example, a year ago nobody had heard of BERT, but since then we have been showing the highest performance in BERT inferencing.

Tell us a little about your current work, any pressing issues that need to be solved?

Colin Raffel
I spent most of my time this year working on “Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.” One of the results in this paper involves taking a large Transformer model and training it on a giant unlabeled text corpus. I think the research community has a good understanding of the impressive scalability of these models, but I was still personally surprised by how far we were able to push performance by making things larger. The issue with this approach is that giant models can be expensive and inconvenient to use.
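For readers unfamiliar with the text-to-text framing, every task is cast as feeding text in and generating text out, with a short prefix naming the task. A minimal sketch follows, using the Hugging Face transformers library and the public t5-small checkpoint purely for illustration; this is an assumed setup, not the codebase used in the paper.

# Text-to-text sketch: one model, any task, expressed as "text in, text out".
# The Hugging Face transformers T5 checkpoint is assumed here for illustration.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# A task prefix tells the model what to do; the answer comes back as text.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))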

I think in 2019 we explored the limits of transfer learning, and in 2020 we will explore how efficient we can make these models. We’ve already started to see some movement in this direction, with methods for distilling BERT (e.g. DistilBERT or distilling into a single-layer RNN) as well as initiatives like the SustaiNLP challenge.
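The distillation methods Raffel points to share a common core: a small student model is trained to match the softened predictions of a large teacher. Below is a minimal sketch of that loss in PyTorch, with hypothetical student_logits and teacher_logits tensors; it is a sketch of the general idea, not the actual DistilBERT training recipe.

# Generic knowledge distillation loss: the student matches the teacher's
# softened output distribution in addition to the true labels.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard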

Jun Zhou
I am currently working on a privacy preserving machine learning project at Ant Financial named “Shared Machine Learning.” We aim to provide tools for multiple parties to collaboratively build machine learning models while keeping their data secure. We do this using whatever techniques are at hand, e.g. secret sharing, garbled circuits, oblivious transfer, and trusted execution environments. So far, Shared Machine Learning has been successfully deployed in various tasks inside and outside Ant Financial, including intelligent marketing, risk control, and intelligent lending.
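Of the techniques Zhou lists, secret sharing is the easiest to illustrate: a private value is split into random shares so that no single party learns it, yet simple aggregates can still be computed share by share. The toy sketch below shows additive secret sharing over a prime field; it is illustrative only and not the Shared Machine Learning implementation.

# Toy additive secret sharing: each party holds a random share, individual
# shares reveal nothing, and sums can be computed on shares alone.
# Illustration only; not Ant Financial's production system.
import secrets

P = 2**61 - 1  # large prime modulus

def share(value, n_parties=3):
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two inputs are secret-shared; parties add their shares locally, and only
# the combined result is ever revealed.
a_shares, b_shares = share(42), share(100)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 142, without exposing 42 or 100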

The most pressing issue currently is efficiency when building large scale secure machine learning models. As I mentioned earlier, real-world applications have more constraints, such as poor network conditions, which make it difficult to directly apply existing state-of-the-art approaches to large scale datasets. How to educate people and convince them that privacy preserving machine learning can indeed protect their data privacy is another open issue.

Anna Veronika Dorogush
We are the machine learning department of a really large company, so we touch on all parts of machine learning — trying to improve the quality of the algorithms to achieve better accuracy, and trying to improve their interpretability because people want to understand what’s happening there. For example, for personal assistants, in addition to speech recognition and speech synthesis, we’re also working on things like a chit-chat engine, which is a very new area with very few publications that can help provide solutions. And that’s something we must do internally, so we have a team working on that.

Chi Wang, CEO of TerraQuanta
In particular, convolutional LSTM really caught our attention. The paper that proposed convolutional LSTM was published a few years ago, and at that time its major application was predicting rainfall intensity. But we made some modifications to make it useful for classification purposes, and it has proven very valuable for us in dealing with spatial data.
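As a rough illustration of the idea Wang describes, the sketch below repurposes a convolutional LSTM layer for classifying a sequence of spatial frames, such as a time series of satellite tiles. The shapes and layers are assumptions for demonstration, not the TerraQuanta model.

# Minimal ConvLSTM classifier over a sequence of image frames.
# Assumed architecture for illustration only.
import tensorflow as tf

model = tf.keras.Sequential([
    # Input: (time, height, width, channels)
    tf.keras.layers.Input(shape=(10, 64, 64, 3)),
    # Convolutional state transitions over the time dimension.
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                               return_sequences=False),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.GlobalAveragePooling2D(),
    # Classification head in place of the original next-frame prediction head.
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()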

But there are two major problems facing our own area, and I’m assuming these are pretty common challenges as well. One is the incompatibility between research and application — solid research is no guarantee of successful applications. Another is with the datasets — very few of them can be directly applied to train real-world models.

Hung H. Bui
I’ve been working on multiple aspects of machine learning, probabilistic modeling, and representation learning. The problem is to see how you can actually learn useful representations from data, and then try to apply them to a whole range of problems, for example trying to understand human behaviors, trying to understand conversational language, etc.

What area may be worth exploring in 2020 and beyond?

Colin Raffel
The best-performing semi-supervised learning algorithms largely proceed by making up fake labels for unlabeled data and then training the model against these fake targets. This general approach is so powerful because we know how to do supervised learning really well. However, it’s unlikely that this is the most efficient way for a model to learn. An interesting way to challenge this paradigm would be to try to give models the ability to seek information. Humans don’t learn by looking at tons of examples — we learn by querying the world when we make a mistake or don’t understand something.

I think that allowing models to ask questions like “Can you explain what an apple is?” or “Can you provide more information about why this image is labeled as a cat?” might make learning substantially more efficient in the future.
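The “fake labels” approach Raffel describes is often called pseudo-labeling. A minimal sketch of one training step follows, assuming generic PyTorch models, batches and a confidence threshold; it illustrates the general idea rather than MixMatch, UDA, S4L or ReMixMatch specifically.

# One pseudo-labeling step: confident predictions on unlabeled data become
# training targets alongside the ordinary supervised loss.
import torch
import torch.nn.functional as F

def semi_supervised_step(model, labeled_batch, unlabeled_batch,
                         optimizer, threshold=0.95, lambda_u=1.0):
    x_l, y_l = labeled_batch          # small labeled batch
    x_u = unlabeled_batch             # larger unlabeled batch

    # Supervised loss on the labeled examples.
    loss = F.cross_entropy(model(x_l), y_l)

    # "Make up" labels for unlabeled examples from the model's own
    # predictions, keeping only the confident ones.
    with torch.no_grad():
        probs = F.softmax(model(x_u), dim=-1)
        conf, pseudo_y = probs.max(dim=-1)
        mask = conf >= threshold
    if mask.any():
        loss = loss + lambda_u * F.cross_entropy(model(x_u[mask]), pseudo_y[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()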

Jun Zhou
Secure multi-party computation and trusted execution environments are two popular techniques for secure machine learning. There has been some research work on this recently. However, there is still a long way to go before they are widely deployed in large-scale applications. Many problems remain to be solved, such as how to balance security, efficiency and accuracy when building privacy preserving machine learning models.

For example, in real-world applications, when several parties collaboratively build secure machine learning models, there is always an intractable problem: the network bandwidth between them is usually quite limited. So how to reduce communication cost while maintaining model accuracy is a good research question. Although there is related work in the traditional decentralized machine learning area, how to solve this problem in a privacy preserving setting is still an open question.

Hung H. Bui
Computer vision, I think, is going to continue to be a focus into the future because of the kinds of applications that you get out of advanced computer vision research. And to me, one thing that we haven’t really understood is how a child acquires knowledge — how it learns with very little supervision compared to the amount of supervision that we constantly have to provide for supervised neural networks. I think it’s a really fundamental problem for generalizing models and putting them into lots of applications.

Michael Spranger
I think one of the big frontiers as a researcher is creativity, so really understanding human creativity and being able to build systems that are really creative.

Chi Wang
From my own perspective, naturally I’d hope to see more research on AI applications in aerospace, on satellite imagery for instance. This is so far a less noticed area, and we’ve probably benefitted from that, but I think it’s an emerging area that’s worth exploring.

Anna Veronika Dorogush
I think one of the fastest evolving areas now is still NLP, and I think we will see new things there.

Ludovic Larzul
We’re focusing on inferencing. For us, next year could be big because new FPGAs are becoming available.

Journalist: Yuan Yuan, Fangyu Cai | Editor: Michael Sarazen

