Brief Report

New Ideas and Emerging Research

by Tim Menzies and Abhik Roychoudhury


The aim of the track was to promote new ideas, and there was a general feeling that not enough genuinely "new ideas" were being submitted. We therefore made an explicit attempt to craft the Call for Papers (CFP) so that the track would attract submissions with new ideas rather than short ICSE-style papers, that is, regular conference papers with less evaluation. In this context, the NIER track chairs had substantive discussions with the ICSE15 general chair and the ICSE15 program chairs, which was extremely helpful.


We structured the CFP so that each submission had to fall into one of three categories. It was also explicitly stated that submissions do not need to include an evaluation, though one may be presented if available.


1. Reflections (on the past), such as:

  • Startling results that call into question current research directions;
  • Bold arguments that current research directions may be somehow misguided;
  • Results that disregard established results …

2. Initiatives (recently funded new ideas):

  • Summaries of newly awarded, highly innovative, large multi-year research grants.

3. Visions (of the future):

  • Bold visions of new directions which may not yet be supported by solid results, but rather by a strong and well-motivated scientific intuition. Examples of such visions include unusual synergies with other disciplines, or the importance of software engineering in problems whose software engineering aspects have not been studied before.

The submissions

There were 135 submissions, of which only 2 were desk rejected; the others were suitable for review. Of these 133, the division among the three categories was as follows:

  • Reflections of the past — 18 papers
  • Initiatives — 40 papers
  • Visions of the future — 75 papers

We believe that most NIER submissions are cast as "Visions of the future", even when they contain some degree of reflection. Submissions that were really reflections were cast accordingly, and papers describing recently started research initiatives were explicitly marked as such.

The review process

The review process required PC members to upload half of their reviews halfway through the process. As a result, the chairs had time to look through the reviews ahead of the discussions. All PC members were reminded not to recommend short ICSE-style papers and instead to focus on the ideas; in the final discussions this was explicitly raised for every paper. Generally speaking, PC members were willing to upgrade a paper with a lower score if it could be shown that the paper really had a new idea. However, they were more reluctant to downgrade a paper with a higher score, even when that paper was a well-polished but incremental one. This is one area where the chairs spent a lot of time discussing with the PC members, so that papers with new ideas would be accepted.

The acceptances and slots in the conference

We accepted 25/133 papers, an acceptance rate of about 19%. Of the 25 accepted papers, 12 were given long presentations of 20 minutes and the remaining 13 short presentations of 10 minutes. All papers, long or short, also got a chance to present a poster in a NIER poster session.

The acceptances in each category were:

  • Reflections: 1/18 papers
  • Initiatives: 4/40 papers
  • Visions: 20/75 papers

Brief notes about the papers

Since the CFP emphasized "new ideas" strongly and used explicit categorization, it may have discouraged some authors from submitting short ICSE-style papers with less evaluation. Generally, we feel this is welcome, since we want to focus on submission quality as well as on submission numbers.

Each session had a mix of 20-minute and 10-minute timeslots allocated to the papers, and all papers were presented in a poster session. Generally, we found that accepting some papers with lower reviewer scores, where the low score was probably due to a contentious claim in the paper, generated a lot of discussion and healthy debate. In some sense, this tells us that for NIER papers, even more than for regular conference papers, the selection criteria need to be tuned: PC members will have the usual selectivity criteria in mind, which differ from the NIER criteria.

We feel that the emphasis on discouraging short ICSE-style papers helped us get more suitable submissions (evidence: the Wednesday sessions were packed, with high energy and a high degree of novelty and engagement). The role of the chairs also seems crucial for NIER if they want to encourage the PC not to rank down non-standard papers, since these are exactly the papers that generate substantive, interesting discussions.

The acceptance rates for the three categories are very different, so it is useful to discuss how, or even whether, the "Reflections" category is generating value. The "Initiatives" category, by contrast, received more submissions, and the paper receiving the best paper award was also from that category.

One side remark regarding the "Initiatives" category is whether people would be willing to submit to it at all, or would instead prefer to write full ICSE papers reporting results from their grants. Based on the number of submissions, we feel that this may not be a concern.

NIER also seems to attract a lot of attention and attendance: on Wednesday we noticed that the sessions were overflowing, so perhaps a bigger room could be allocated in future years.

The combination of long and short presentations worked quite well. However, since a NIER paper is only 4 pages, we did not introduce short NIER papers, as they might be too short to capture the idea. All papers received 4 pages, but were differentiated by presentation length. In our experience, this division worked well, since the aim of a NIER paper is not so much the publication itself as the discussion it generates. So distinguishing papers by presentation length appears more appropriate than distinguishing them by paper length.