What’s Hot In Your Field

[Image: From Apple Special Event, September 12, 2017]

Finding an interesting problem to solve is crucial in research. It is just as important to know what other researchers find interesting, mainly to understand where your field is moving. But there are also several practical reasons to know what's hot in your field: you will have more opportunities to collaborate with other researchers, more datasets and code will be released, more challenges and workshops will be organised, and more funding will be available for hotter problems.

Although arXiv has become a publication venue of its own for Computer Science, the signal-to-noise ratio there is still quite low. So, IMHO, conferences are the best places to look for quality work. But with hundreds of papers getting accepted, it is not easy to go over the whole paper list even for a single conference. If you follow more than one conference, even a simple task such as browsing for interesting papers can take plenty of time. So I decided to write a script to do it for me.

I have been using it for some time and decided to share it. It's not perfect, and there is nothing fancy going on, but it does the job for me. Anyway, here's the code.
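In essence, it boils down to something like the following (a simplified sketch, not the exact script; the stopword list, the n-gram sizes, and the INTERESTING keyword set here are illustrative choices you should tune to your own taste):

```python
from collections import Counter
import re

# Function words we don't want inside the reported n-grams
STOPWORDS = {"a", "an", "and", "for", "in", "of", "on", "the", "to", "via", "with", "using"}

# Keywords that flag a paper as "interesting" -- edit to match your interests
INTERESTING = {"reasoning", "question answering", "comprehension"}

def ngrams(title, n):
    """Yield n-grams over the non-stopword tokens of a lowercased title."""
    tokens = [t for t in re.findall(r"[a-z]+", title.lower()) if t not in STOPWORDS]
    for i in range(len(tokens) - n + 1):
        yield " ".join(tokens[i:i + n])

def summarize(titles, top=20):
    """Print the most frequent bigrams/trigrams and the titles matching INTERESTING."""
    counts = Counter(g for t in titles for n in (2, 3) for g in ngrams(t, n))
    print(f"There are {len(titles)} accepted papers")
    print("-" * 50)
    for gram, count in counts.most_common(top):
        print(gram, count)
    hits = [t for t in titles if any(k in t.lower() for k in INTERESTING)]
    print(f"There are {len(hits)} interesting papers")
    print("-" * 50)
    for t in hits:
        print(t)
```

Given a plain text file with one accepted title per line (the file name here is just an example), `summarize(open("nips2017.txt").read().splitlines())` produces listings like the ones below.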

Now let's see how it works. The NIPS 2017 accepted papers were recently announced. Here's the output for NIPS 2017:

There are 680 accepted papers
--------------------------------------------------
neural networks 34
reinforcement learning 24
deep learning 12
variational inference 11
deep neural networks 11
deep neural 11
generative adversarial 10
gaussian processes 10
gradient descent 9
neural network 9
large scale 8
deep reinforcement 7
graphical models 7
deep reinforcement learning 7
coordinate descent 7
online learning 7
recurrent neural 7
adversarial networks 6
semi supervised 6
multi agent 6
There are 7 interesting papers
--------------------------------------------------
multimodal learning and reasoning for visual question answering
avoiding discrimination through causal reasoning
question asking as program generation
what-if reasoning using counterfactual gaussian processes
differentiable learning of logical rules for knowledge base reasoning
high-order attention models for visual question answering
a simple neural network module for relational reasoning

There are a lot of neural networks, right? :D That's one way to look at it, but variational inference and Gaussian processes are getting more popular, I guess.

This time let’s look at what EMNLP 2017 has to say. Here’s the output:

There are 344 accepted papers
--------------------------------------------------
machine translation 29
neural machine 23
neural machine translation 23
neural networks 11
reinforcement learning 9
relation extraction 9
word embeddings 8
semantic parsing 7
based neural 7
question answering 7
sentiment analysis 6
fine grained 6
dependency parsing 5
cross lingual 5
attention based 5
natural language 5
entity recognition 4
language models 4
sequence models 4
sense disambiguation 4
There are 25 interesting papers
--------------------------------------------------
machine translation, it's a question of style, innit? the case of english tag questions
identifying where to focus in reading comprehension for neural question generation
what is it? disambiguating the different readings of the pronoun ‘it’
learning what to read: focused machine reading
recovering question answering errors via query revision
an end-to-end deep framework for answer triggering with a novel group-level objective
story comprehension for predicting what happens next
learning to paraphrase for question answering
question generation for question answering
an analysis of eye-movements during reading for the detection of mild cognitive impairment
two-stage synthesis networks for transfer learning in machine comprehension
a question answering approach for emotion cause extraction
accurate supervised and semi-supervised machine reading for long documents
adversarial examples for evaluating reading comprehension systems
race: large-scale reading comprehension dataset from examinations
reasoning with heterogeneous knowledge for commonsense machine comprehension
structural embedding of syntactic trees for machine comprehension
world knowledge for reading comprehension: rare entity prediction with hierarchical lstms using external descriptions
the promise of premise: harnessing question premises in visual question answering
temporal information extraction for question answering using syntactic dependencies in an lstm-based architecture
latent space embedding for retrieval in question-answer archives
deeppath: a reinforcement learning method for knowledge graph reasoning
document-level multi-aspect sentiment classification as machine comprehension
asking too much? the rhetorical role of questions in political discourse
quint: interpretable question answering over knowledge bases

This time there is a lot of machine translation, right? :)

Topics to watch are relation extraction, semantic parsing, question answering, and cross-lingual learning.

You might notice that although 344 papers were accepted, my n-gram match script filtered the whole list down to 25, which is far more manageable. Also, EMNLP people seem to prefer longer titles :D

Finally, let’s look at what CVPR 2017 has to say:

There are 784 accepted papers
--------------------------------------------------
neural networks 31
convolutional neural 24
weakly supervised 19
action recognition 17
semantic segmentation 16
pose estimation 16
convolutional neural networks 15
object detection 15
neural network 15
deep learning 14
zero shot 13
person identification 12
deep neural 11
spatio temporal 11
question answering 10
representation learning 10
convolutional networks 9
shot learning 9
optical flow 9
image classification 9
There are 18 interesting papers
--------------------------------------------------
tgif-qa: toward spatio-temporal reasoning in visual question answering
clevr: a diagnostic dataset for compositional language and elementary visual reasoning
dual attention networks for multimodal reasoning and matching
comprehension-guided referring expressions
graph-structured representations for visual question answering
end-to-end concept word detection for video captioning, retrieval, and question answering
lip reading sentences in the wild
mining object parts from cnns via active question-answering
the vqa-machine: learning how to use existing vision algorithms to answer new questions
multi-level attention networks for visual question answering
the surfacing of multiview 3d drawings via lofting and occlusion reasoning
are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension
creativity: generating diverse questions using variational autoencoders
knowledge acquisition for visual question answering via iterative querying
making the v in vqa matter: elevating the role of image understanding in visual question answering
what's in a question: using visual questions as a form of supervision
an empirical evaluation of visual question answering for novel objects
a dataset and exploration of models for understanding video data through fill-in-the-blank question-answering

This time, as one might expect, we have a lot of convolutional networks. But we also have plenty of weakly supervised learning, pose estimation, zero-shot learning, person identification and, finally, question answering.

Conclusion

I thought I could do word-embedding-based retrieval instead of a simple word match to find the relevant papers, but I wanted to keep it simple and didn't want to postpone this post any longer.
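For the curious, here is a rough sketch of what that could look like: average the word vectors of each title and rank titles by cosine similarity to a query. The tiny hand-made vectors below are only a stand-in for real pretrained embeddings (e.g. GloVe loaded into a dict of the same shape):

```python
import numpy as np

# Toy 2-d word vectors for illustration; in practice, load pretrained
# embeddings into a dict with the same word -> vector structure.
VECS = {
    "question":   np.array([1.0, 0.0]),
    "answering":  np.array([0.9, 0.1]),
    "reasoning":  np.array([0.8, 0.2]),
    "pose":       np.array([0.0, 1.0]),
    "estimation": np.array([0.1, 0.9]),
}

def embed(text):
    """Average the vectors of the known words; zero vector if none are known."""
    vs = [VECS[w] for w in text.lower().split() if w in VECS]
    return np.mean(vs, axis=0) if vs else np.zeros(2)

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

def rank(titles, query, k=3):
    """Return the k titles most similar to the query in embedding space."""
    q = embed(query)
    return sorted(titles, key=lambda t: cosine(embed(t), q), reverse=True)[:k]
```

Unlike the exact n-gram match, this would also surface titles that never mention your keywords but live nearby in embedding space. Maybe in a future iteration.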

You can find the code and the sample files here. Hope it helps.
