AI in 2018: A Review, and What to Expect in 2019

Synced | Published in SyncedReview | Jan 7, 2019 | 7 min read

The year 2018 saw unprecedented growth in new AI research, the emergence of numerous real-world AI implementation challenges, and an increasing number of countries following in the footsteps of the US and China by committing to ambitious national AI strategies. The AI community was also forced to grapple with the thorny issue of data usage restrictions as concerns about AI safety and privacy rippled around the globe.

To welcome 2019 with some fresh AI insights, Synced posed three questions to forty researchers and experts at last month’s NeurIPS 2018 conference in Montréal, Canada.

Q1: What was the most important/trending/promising machine learning technology of 2018?

Keywords: GAN, Deep reinforcement learning, Inference, Safety and robustness

Younes Zerouali, Research Scientist at Stradigi AI
GANs are gaining more and more attention. They were a rising trend a few years ago, but now they are booming. GANs occupy such a large share of the research mainly because computer vision plays such a large role in the machine learning community.

Yizhe Zhang, NLP Researcher at Microsoft Research
Using GANs on text is probably a promising direction. Others include deeper combinations of GANs with optimal transport.
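At its core, the adversarial setup Zhang refers to pits a generator against a discriminator. Below is a minimal sketch of that training loop in PyTorch on toy 1-D data; the architectures, hyperparameters, and data are our own illustrative choices, not anything from the interview.

```python
# Minimal GAN sketch on toy 1-D data (illustrative only).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # "real" data: N(2, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator: push real toward 1, fake toward 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into outputting 1 on fakes.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Text generation complicates this picture because sampling discrete tokens is not differentiable, which is one reason reinforcement-learning-style external rewards, as Zhang suggests, come into play.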

Eric (Hanning) Zhou, R&D Manager at Facebook
Several NeurIPS keynotes mentioned the use of real data. In fact, Facebook is using its user data to train its reinforcement learning models, and so is YouTube. These research papers are not yet published, perhaps because real user data is relatively sensitive. I believe that reinforcement learning in industrial applications will be a truly revolutionary trend.

Haoji Hu, Associate Professor at Zhejiang University
Many RL-related papers are still limited to games, perhaps because reinforcement learning requires a large data platform to converge, which limits its scope in real-world scenarios. How do we make it work in industry? Once there is a breakthrough here, RL-empowered applications will boom.

Eric Xing, Professor at Carnegie Mellon University/Co-Founder of Petuum
There have been no particularly impressive technological breakthroughs recently. Most are basically incremental, scaling the models or trying new applications.

Yiran Chen, Professor at Duke University
I feel like there is nothing new coming out. Everyone raised a lot of questions, and then solved them.

Hsiang Hsu, Ph.D. at Harvard University
Domain adaptation. Right now we use a single domain to train a neural network, but people don't learn this way. More researchers are now delving into multi-domain learning: for example, training with voice and image data at the same time (see the sketch after this answer). There will be more and more learning like this.

The other is causal inference. Current machine learning amounts to approximating a function, which is far from the philosophy of AGI. You have to let the machine learn cause and effect. Although there are not many such papers at this conference, I believe there will be more and more.
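As a concrete illustration of the multi-domain learning Hsu describes in his first point, here is a minimal sketch: two modality-specific encoders (audio and image) map into one shared representation, and a single head is trained on batches from both domains. All dimensions and names are illustrative assumptions, not from the interview.

```python
# Multi-domain learning sketch: per-modality encoders, one shared head.
import torch
import torch.nn as nn

audio_enc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
image_enc = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 32))
shared_head = nn.Linear(32, 10)  # one classifier over the shared 32-d space

def forward(x, domain):
    enc = audio_enc if domain == "audio" else image_enc
    return shared_head(enc(x))

# One optimizer over all parameters; batches alternate between domains.
params = (list(audio_enc.parameters()) + list(image_enc.parameters())
          + list(shared_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for domain, dim in [("audio", 128), ("image", 784)]:
    x, y = torch.randn(16, dim), torch.randint(0, 10, (16,))  # toy batch
    loss = loss_fn(forward(x, domain), y)
    opt.zero_grad(); loss.backward(); opt.step()
```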

Chongjie Zhang, Assistant Professor at Tsinghua University
Composition. Researchers are paying more attention to how to integrate deep neural networks with symbolic reasoning to enhance interpretability and generalization.

On the other hand, applications have also made great progress. From a technical point of view, multi-agent collaboration will become more and more important.

Yisong Yue, Assistant Professor at California Institute of Technology
In terms of research activity, the work on things like safety and robustness is very exciting. In the last few years we've seen an increase in activity, with people trying to understand how to handle fairness, safety, and various other considerations.

Q2: What frontier research area/direction/topic is worth exploring in 2019 and beyond?

Keywords: Composition, NLP, Causal Reasoning, Unsupervised Learning

Eric (Hanning) Zhou: One direction mentioned in the Test of Time award talk at NeurIPS this year: compositionality. Compositionality means the tasks a machine learns are relatively basic, like the alphabet. By combining the alphabet with another basic task, say phonetics, a machine can learn a higher-level task, for example speech recognition. Originally the machine only learned the alphabet and phonetics, but now it has speech recognition skills through the composition of these two simple tasks. Rather than training speech recognition end to end, I think this is something closer to human intelligence.
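A toy illustration of the idea (ours, not from the talk): two basic skills are defined or learned separately, then composed into a higher-level skill without any end-to-end retraining.

```python
# Compositionality toy: two basic skills composed into a higher-level one.
def letters_to_phonemes(letters):          # basic skill 1 (stub lookup)
    table = {"c": "k", "a": "ae", "t": "t"}
    return [table[ch] for ch in letters]

def phonemes_to_word(phonemes):            # basic skill 2 (stub lexicon)
    lexicon = {("k", "ae", "t"): "cat"}
    return lexicon[tuple(phonemes)]

def recognize(letters):                    # composed high-level skill
    return phonemes_to_word(letters_to_phonemes(letters))

print(recognize("cat"))                    # -> "cat"
```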

Yizhe Zhang: There are still many unresolved issues in natural language processing, and many approaches are still relatively primitive. NLP right now is mostly an autoregressive process. If there are ways to pass an external reward into that autoregressive process through reinforcement learning, it could be something worth looking into in the future.

Eric Xing: I hope everyone can focus on topics that are more relevant to actual applications and scenarios. For example, let's first relax the assumptions about the data, or the quality requirements for the data, and see whether the algorithm and model can still work well. Algorithms should have strong adaptability and versatility.

Xiaojie Wang, Professor at Beijing University of Posts and Telecommunications
How do we determine the dimension of word embeddings based on different data and tasks? Determining the structural parameters of a network according to the specific task is valuable.

Nan Jiang, Assistant Professor at University of Illinois at Urbana-Champaign
Causal reasoning: introducing the concepts of traditional statistical causal reasoning into machine learning. Machine learning traditionally only finds correlation; it cannot find causality. Finding causality is the key to many problems, because you can understand why something happens, and it brings more security and stability.
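The distinction Jiang draws can be made concrete with a small simulation (our example). Below, a hidden confounder Z drives both X and Y, so they are strongly correlated in observational data, yet intervening on X, the do-operation of causal inference, reveals that X has no causal effect on Y.

```python
# Correlation vs. causation toy: a confounder Z drives both X and Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational world: Z -> X and Z -> Y, but no X -> Y edge.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)
print("observational corr(X, Y):", np.corrcoef(x, y)[0, 1])        # ~0.99

# Interventional world: do(X = x0) cuts the Z -> X edge.
x_do = rng.normal(size=n)                  # X set externally, ignoring Z
y_do = z + 0.1 * rng.normal(size=n)        # Y unchanged, still driven by Z
print("interventional corr(X, Y):", np.corrcoef(x_do, y_do)[0, 1])  # ~0
```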

Yisong Yue: Things like understanding the robustness of machine learning. A lot of people are trying to understand why machine learning works so well; we have a lot of empirical evidence, but the theoretical understanding is quite lacking. Likewise, when does machine learning fail? You have these examples where you modify an image of a stop sign so that it still looks like a stop sign to humans, but the machine learning algorithm thinks it's not a stop sign. Stuff like that is also related to machine learning safety.
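The stop-sign failure Yue describes is the adversarial-example phenomenon. One standard minimal construction is the fast gradient sign method (FGSM) of Goodfellow et al.: perturb the input one small step in the direction that increases the loss. The sketch below uses a toy untrained model and random data, so the prediction flip is not guaranteed; it only shows the mechanics.

```python
# FGSM sketch: a tiny, human-imperceptible perturbation can flip a prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 784, requires_grad=True)   # stand-in for a stop-sign image
y = torch.tensor([3])                        # its true label

loss = loss_fn(model(x), y)
loss.backward()                              # gradient of loss w.r.t. the input

eps = 0.05                                   # perturbation budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1)

print("clean pred:", model(x).argmax(dim=1).item())
print("adv pred:  ", model(x_adv).argmax(dim=1).item())
```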

Ozan Öktem, Associate Professor at KTH Royal Institute of Technology
Maybe some of the biological applications would be very interesting, because many of the biology problems have not been addressed nicely with mathematics so far.

Haoji Hu: The combination of deep learning and hardware. How to deploy and optimize deep learning algorithms based on specific hardware. Now companies are vying for this market.

Bert Huang, Assistant Professor at Virginia Tech Department of Computer Science
How you train these complicated models when you don't have enough data, or not enough labeled data, is a key question. There's often a lot of data available, but it's not all labeled. It's a lot cheaper to get unlabeled data; labeling it is what's expensive, and that's where you want to reduce costs.
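One common response to the labeled-data bottleneck Huang describes is self-training: fit on the small labeled set, pseudo-label the unlabeled pool wherever the model is confident, and refit. The sketch below is an illustrative toy version with scikit-learn; the confidence threshold and round count are arbitrary choices.

```python
# Self-training sketch: grow the training set with confident pseudo-labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=0)
X_lab, y_lab = X[:50], y[:50]          # tiny labeled set
X_unlab = X[50:]                       # large unlabeled pool

clf = LogisticRegression().fit(X_lab, y_lab)

for _ in range(3):                     # a few self-training rounds
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.95          # keep only confident points
    X_train = np.vstack([X_lab, X_unlab[confident]])
    y_train = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    clf = LogisticRegression().fit(X_train, y_train)
```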

Yuanyuan Liu, Director at AIG
The European GDPR has been enacted. The monitoring and regulation of AI, regarding its fairness, interpretability, and causality, will continue.

Q3: What are the most significant challenges facing your research?

Keywords: Insufficient data, Domain expertise, Engineering, Implementation

San Lee, Co-founder & CEO of Zeroone.ai
It's really hard to find good, high-quality data. Although the amount of data keeps increasing, our computing power has to catch up. You also have to find new data sources to access more data. It's all about data.

Yuxing Ben, Staff Data Scientist at Anadarko Petroleum Corp.
The biggest bottleneck now is that there is no general database system, so this process requires a lot of human resources. Resource integration needs to improve in our industry.

Eric Xing: The elaboration and engineering of AI technology is one of the greatest needs, but few really take it seriously. For example, how do you ensure that your algorithm, or even a single component of it, can be reused across different scenarios: stable, maintenance-free, and with no need to debug? In an industrial environment, a screw and nut can be used for a variety of things. In this regard, AI is still a fledgling.

Yisong Yue: The main challenge I’m facing currently is how do you actually bring machine learning into [traditional industries] in a way that respects the hard work they’ve done in the last 100 years. You don’t want to just bring a machine learning system into that world and then have it crash an airplane. We need to really respect what they’ve done.

Younes Zerouali: The major one in our field is translating actual problems into machine learning problems. This interface between industrial or real problems and machine learning problems is really young and has to be cultivated, I think, even more than the algorithms themselves.

Yiran Chen: My biggest concern is that everyone is rushing to publish papers first. Basically: have an idea, write it up, and call "dibs." Then a researcher who really works hard on that topic finds the idea has already been claimed, even if their result is better. Everyone is pursuing short-term results, and speed seems to be the only thing that matters.

Peng Peng, Research Scientist at Inspir.ai
Transfer learning. Applications need to be tailored to specific scenarios; right now there is no algorithm that can be applied directly to real business scenarios without manual intervention.

Yuxi Li, Founder of Attain.ai
Reinforcement learning now needs to be put to practical applications, and there are certainly many challenges. This technique may break out in 2020.

Journalist: Tony Peng | Editor: Michael Sarazen

The 2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for the Insight Partner Program to get a complimentary full PDF report.

Follow us on Twitter @Synced_Global for daily AI news!

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
