NIPS Accepted Papers Stats
The Thirty-first Annual Conference on Neural Information Processing Systems (NIPS) is being held this week in Long Beach, CA. NIPS is arguably the most prestigious AI-related academic conference. It’s also the largest in terms of attendance. The last few years have seen a dramatic spike in attendance.
As you might guess, the number of papers submitted for review has also grown. In 2016, 2,406 papers were submitted and 568 were accepted for a 24% acceptance rate. This year, 679 papers out of 3,240 submitted were accepted for a 21% acceptance rate.
NIPS has been on my radar since I started working on my Ph.D. Last week, I announced that I'm joining a new machine learning startup, Infinia ML. The main reason I decided to join forces with Lawrence Carin instead of starting solo like I did with Automated Insights was the team of machine learning experts he has assembled. Lawrence is a significant player in the machine learning academic world, and NIPS is a good example of that. Lawrence's group was responsible for ten papers at NIPS this year. Getting even a couple of papers accepted is a major feat, so ten is a big number. That led me to do some analysis on the people and institutions behind the 679 papers that were accepted. That's what I'll cover next.
NIPS doesn’t make it easy
In addition to understanding which researchers (like Lawrence) were getting papers accepted at NIPS, I was curious to find out which institutions were the most prolific. Much like Andrej Karpathy did for ICML, I figured it would be a straightforward data mining task to come up with these stats, especially since NIPS uses the same website as ICML.
In October, when I looked at the initial list of accepted papers (which you can find on the Wayback Machine), the format was a list like this:
VAE Learning via Stein Variational Gradient Descent
Yuchen Pu (Duke University) · zhe Gan (duke) · Ricardo Henao (Duke University) · Chunyuan Li (Duke University) · Shaobo Han (Duke University) · Lawrence Carin (Duke University)

Efficient Use of Limited-Memory Resources to Accelerate Linear Learning
Celestine Dünner (IBM Research) · Thomas Parnell (IBM Research) · Martin Jaggi (EPFL)

Temporal Coherency based Criteria for Predicting Video Frames using Deep Multi-stage Generative Adversarial Networks
Prateep Bhattacharjee (Indian Institute of Technology Madras) · Sukhendu Das (IIT Madras)
That's workable. It has all the information I need: the paper title, the list of authors, and each author's institution(s). It was straightforward to write a script to parse that format and generate the stats I was looking for. However, this was my first introduction to the wide variety of names used for the various institutions. There are ten different ways "Google" is represented and 11 different versions of "IBM," so that required some manual scrubbing. No big deal.
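As a rough sketch of what that parsing looks like (the function name, regex, and field layout here are my own assumptions for illustration, not the exact script I used):

```python
import re

# Each entry is a title line followed by an author line of the form
# "Name (Institution) · Name (Institution) · ...".
AUTHOR_RE = re.compile(r"([^·(]+)\(([^)]*)\)")

def parse_entry(title_line, author_line):
    """Return (title, [(author, institution), ...]) for one paper."""
    authors = []
    for raw_name, raw_inst in AUTHOR_RE.findall(author_line):
        # Lowercase institutions up front -- "Duke University" vs "duke"
        # style variants get collapsed in a later scrubbing pass.
        authors.append((raw_name.strip(), raw_inst.strip().lower()))
    return title_line.strip(), authors

title, authors = parse_entry(
    "VAE Learning via Stein Variational Gradient Descent",
    "Yuchen Pu (Duke University) · zhe Gan (duke) · Lawrence Carin (Duke University)",
)
```

The "·" separator is what makes this format pleasant to work with: one split-friendly delimiter per author, with the institution reliably in parentheses.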
The bigger problem came when I wanted to get updated stats (for this post). Gone is the nicely formatted style above for the final list of accepted papers. There are a couple of places I found to get the latest list: here and here. There is only one problem: neither page includes the institutions for each author! I did some searching but couldn't find any other sources. I could have tried pulling the institutions out of the papers themselves, but after looking at several, I saw there is no standardization in how authors are formatted, so that seemed like a big undertaking for what should have been a quick task.
My only other option was to hope that there hadn’t been many changes since the September release of the initial papers. I could use the institutions from the September version for the December version. This made the data parsing tasks a lot more complicated because now I needed to pair data from two separate lists and account for any differences. Also, that meant I wouldn’t have institutions for any new authors that weren’t in the September data.
Ultimately, I was looking for ballpark figures, not precise statistics, so I kept going.
Changes between September and December
The number of papers didn't change between September and December (679 total), but there were quite a few papers whose titles were tweaked or changed completely (which made them very difficult to match up). Between the two lists, I was able to find an exact match for 560 of the 679 papers.
I was able to match another 98 based on a few simple rules. For example, I'd look for a single matching paper between the lists using the first 20, 15, or 10 characters of the title, or a single match using the last 20, 15, or 10 characters.
That left 21 papers that were found in the December list that didn’t have a reasonable match (looking at just titles) in the September list.
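The matching rules above can be sketched roughly like this (a hedged reconstruction; `match_title` and its exact structure are my own, not the original script):

```python
def match_title(dec_title, sept_titles):
    """Pair a December title with a September title: exact match first,
    then an unambiguous prefix or suffix match of decreasing length."""
    if dec_title in sept_titles:
        return dec_title
    for n in (20, 15, 10):
        for key in (lambda t: t[:n], lambda t: t[-n:]):
            candidates = [t for t in sept_titles if key(t) == key(dec_title)]
            if len(candidates) == 1:  # only accept a single, unambiguous match
                return candidates[0]
    return None  # no reasonable match -- left unmatched

sept = ["Temporal Coherency based Criteria for Predicting Video Frames"]
```

Requiring a *unique* prefix/suffix match is what keeps this from mispairing papers with similar openings, at the cost of leaving some genuinely renamed papers unmatched.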
In terms of authors, I was able to find matches for 1,886 of the 2,035. I associated the remaining 149 authors with “unaffiliated,” but overall it meant I had institutions for 93% of the authors. Again, not perfect, but good enough for my purposes.
Now for the good stuff
I had a bunch of questions about who publishes at conferences like NIPS, but they mostly centered on who is the most prolific. For those who were around in the late '90s and early 2000s, it feels like published papers are the new version of the patent arms race from 10–20 years ago. Fortunately (and this is part of what I love about the AI renaissance), papers don't confer IP rights like patents do.
As I mentioned earlier, Lawrence Carin's group at Duke published ten NIPS papers. He holds the top spot, and it's not even close. This doesn't mean Lawrence was the primary author on ten papers, but he oversaw and contributed to all of the research that went into them. It shows a breadth of coverage across several topics, including text analysis, image synthesis, and analysis of dynamic local field potentials in the brain, which is impressive.
Total papers:
1. lawrence carin (duke university): 10
2. alexander schwing (university of illinois at urbana-champaign): 6
3. nicolas heess (deepmind): 5
3. michael jordan (university of california, berkeley): 5
3. andreas krause (eth zurich): 5
3. razvan pascanu (deepmind): 5
3. le song (georgia institute of technology): 5
8. 22 tied with 4
Often the last author on a paper is the head of the research group or oversees a variety of projects. No surprise to see Lawrence here alongside other notables like Bengio.
Last-author:
1. lawrence carin (duke university): 7
2. david blei (columbia university): 4
2. volkan cevher (epfl): 4
2. yoshua bengio (université de montréal): 4
5. 31 tied with 3
On the flip side, the first author listed on a paper is credited with doing a lot of the heavy lifting. Three people contributed three papers each to NIPS as first author, which is a significant achievement.
First-author:
1. arya mazumdar (university of massachusetts amherst): 3
1. eric balkanski (harvard university): 3
1. simon du (carnegie mellon university): 3
4. 23 tied with 2
Now for the top 50 institutions that published at NIPS. I kept Google and DeepMind separate just to show how much Google is dominating.
CMU, MIT, Stanford, and Berkeley are easily the top four universities. Google, Microsoft, and IBM lead the way among the for-profit companies.
The numbers are directionally similar to ICML.
Total papers:
1. google: 60 (8.8%)
2. carnegie mellon university: 48 (7.1%)
3. massachusetts institute of technology: 43 (6.3%)
4. microsoft: 40 (5.9%)
5. stanford university: 39 (5.7%)
6. university of california, berkeley: 35 (5.2%)
7. deepmind: 31 (4.6%)
8. university of oxford: 22 (3.2%)
9. university of illinois at urbana-champaign: 20 (2.9%)
10. georgia institute of technology: 18 (2.7%)
11. princeton: 17 (2.5%)
11. eth zurich: 17 (2.5%)
13. ibm: 16 (2.4%)
14. inria: 15 (2.2%)
14. harvard university: 15 (2.2%)
14. cornell university: 15 (2.2%)
17. duke university: 14 (2.1%)
17. columbia university: 14 (2.1%)
17. university of cambridge: 14 (2.1%)
17. epfl: 14 (2.1%)
21. university of michigan: 13 (1.9%)
22. university of toronto: 12 (1.8%)
22. university of southern california: 12 (1.8%)
22. tsinghua university: 12 (1.8%)
25. facebook: 11 (1.6%)
25. riken: 11 (1.6%)
27. university of washington: 10 (1.5%)
27. university of california, los angeles: 10 (1.5%)
27. university of texas at austin: 10 (1.5%)
27. new york university: 10 (1.5%)
27. university college london: 10 (1.5%)
32. université de montréal: 9 (1.3%)
32. tencent ai lab: 9 (1.3%)
34. openai: 8 (1.2%)
34. adobe: 8 (1.2%)
34. university of california, san diego: 8 (1.2%)
37. university of tokyo: 7 (1.0%)
37. university of pittsburgh: 7 (1.0%)
37. peking university: 7 (1.0%)
37. university of minnesota: 7 (1.0%)
41. university of california, davis: 6 (0.9%)
41. technion: 6 (0.9%)
41. university of pennsylvania: 6 (0.9%)
41. nanjing university: 6 (0.9%)
41. johns hopkins university: 6 (0.9%)
41. university of wisconsin-madison: 6 (0.9%)
47. australian national university: 5 (0.7%)
47. tel aviv university: 5 (0.7%)
47. ohio state university: 5 (0.7%)
47. national university of singapore: 5 (0.7%)
Next, I wanted to see how many times each institution was listed first, as that's an indication of who led or initiated the research. This suggests that while Google is listed on a lot of papers, it contributes to research more often than it leads it: Google shows up fourth on this list.
Total first-author papers:
1. carnegie mellon university: 36
2. massachusetts institute of technology: 30
3. stanford university: 25
4. google: 24
5. university of california, berkeley: 21
6. duke university: 14
6. deepmind: 14
8. eth zurich: 13
9. microsoft: 12
10. harvard university: 11
Lastly, I wanted to see how many authors each institution had. This again shows how many people at these institutions are involved in doing cutting-edge research. CMU has a big advantage there.
Total institution authors:
1. carnegie mellon university: 89
2. google: 78
3. massachusetts institute of technology: 69
4. deepmind: 68
5. stanford university: 66
6. university of california, berkeley: 60
7. microsoft: 59
8. eth zurich: 31
9. university of oxford: 29
10. duke university: 28
10. princeton: 28
What I got out of this (longer than anticipated) exercise is the following:
- Google is clearly leading the way in terms of breadth of research. They are on the most papers, and if you include DeepMind, they have the most authors (by far).
- CMU is the leading academic institution when it comes to publishing at NIPS. It has the most first-author papers and more contributing authors than anyone other than Google/DeepMind.
- Lawrence Carin at Duke has the most productive single group of researchers at NIPS. He’s involved in more papers than any other individual, which is no small feat.
- Duke is in the top 10 of total papers by universities, first-author papers, and total authors. At Infinia ML, we will have a steady stream of great new talent coming from Lawrence’s ML group.
A note on institution names
It strikes me as a little comical that it required such a kludgey effort to pull these numbers together for one of the premier academic conferences in the world, where the latest machine learning research is being presented. With just a little bit of effort on NIPS's part (reusing the September format for the final list in December), this task would have taken an hour instead of a full Saturday afternoon.
But that’s not all. Companies, universities, and research organizations might want to start thinking about standardizing how they are cited in papers too. Given the value we place on published papers these days, compiling stats like I did in this post will become increasingly common. Having a bunch of name variations out there will make it harder to create accurate stats.
Below is a small snippet showing how I collapsed a few of the biggest offenders. I bet you didn't know there were 11 ways to spell "IBM."
'google brain resident': 'google',
'google brain': 'google',
'google inc': 'google',
'google research nyc': 'google',
'google research': 'google',
'google, inc.': 'google',
'deepmind @ google': 'deepmind',
'deepmind technologies': 'deepmind',
'google deepmind': 'deepmind',
'ibm research - china': 'ibm',
'ibm research, ny': 'ibm',
'ibm research, usa': 'ibm',
'ibm t. j. watson research center': 'ibm',
'ibm t. j. watson research': 'ibm',
'ibm t.j watson research center': 'ibm',
'ibm t.j. watson research center': 'ibm',
'ibm t.j.watson research center': 'ibm',
'ibm thomas j. watson research center': 'ibm',
'ibm tj watson research center': 'ibm',
'microsoft research cambridge': 'microsoft',
'microsoft research india': 'microsoft',
'microsoft research maluuba': 'microsoft',
'microsoft research new england': 'microsoft',
'microsoft research, redmond, w': 'microsoft',
'microsoft research, redmond, wa': 'microsoft',
'miicrosoft research': 'microsoft',
'university of wisconsin - madison': 'university of wisconsin-madison',
'university of wisconsin madison': 'university of wisconsin-madison',
'university of wisconsin': 'university of wisconsin-madison',
'university of wisconsin, madison': 'university of wisconsin-madison',
'university of wisconsion-madison': 'university of wisconsin-madison',
'uw-madison': 'university of wisconsin-madison',
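Applying the map is then a small helper. Here's a hedged sketch (`ALIASES` is just a trimmed stand-in for the full dictionary above, and `canonical` is my own naming, not the original script):

```python
# Trimmed stand-in for the full alias dictionary above.
ALIASES = {
    'google inc': 'google',
    'ibm t.j. watson research center': 'ibm',
    'uw-madison': 'university of wisconsin-madison',
}

def canonical(institution):
    """Map a raw institution string to its canonical name before counting."""
    name = institution.strip().lower()
    return ALIASES.get(name, name)
```

Anything not in the dictionary falls through unchanged (already lowercased), so new variants show up in the counts where they're easy to spot and add to the map.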