Tasty Stats from the CSCW 2019 Papers Process

david ayman shamma
Published in ACM CSCW · 3 min read · Nov 22, 2019

CSCW 2019 was the last year of the annual submission cycle. The Papers Chairs, the 121 Associate Chairs, and the 786 Reviewers all worked hard to take the 658 submitted papers through the process: genuinely discussing contributions, delivering quality reviews, and assembling a program of 205 accepted papers (a 31.2% acceptance rate). Along the way, we wanted to show how scores and decisions shifted over the course of the review process. Note, however, that we never used scores as cutoffs during the process, and the scoring scale is changing for 2020, so it will be interesting to see the shape of decisions going forward too.

A bar chart showing the numbers of accepts, rejects, and withdrawals from round one and round two.

First, the final breakdown of decisions. There were 29 desk rejects and 306 rejects after Round 1, then 16 withdrawals and 102 rejects during Round 2. That left 205 accepted papers for the main program (including the 16 shepherded papers).
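The round-by-round arithmetic can be checked with a short sketch (all numbers taken directly from the counts above):

```python
# Paper counts from the CSCW 2019 process, as reported above.
submissions = 658
desk_rejects = 29
r1_rejects = 306

# Papers that advanced to Round 2.
round2 = submissions - desk_rejects - r1_rejects

withdrawals = 16
r2_rejects = 102
accepts = round2 - withdrawals - r2_rejects

print(round2)                           # papers entering Round 2: 323
print(accepts)                          # accepted papers: 205
print(f"{accepts / submissions:.1%}")   # overall acceptance rate: 31.2%
```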

A bar chart showing the decisions broken out by primary paradigm. Here Design, Theory, and Systems are more selective tracks.

If we look at this by Primary Paradigm, there were more Qualitative papers in the program (at a 43% acceptance rate) than all other paradigms combined. Design was the most competitive, accepting at 9.5%, so the 2019 process was harsher on Design, Theory, and Systems than on the other paradigms.

A histogram of Round 1 Reject or Revise decisions.

After Round 1, most papers with a mean score above 2.5 were recommended for Round 2. This came as a recommendation from the ACs and the reviews, not via a score threshold. After Round 2, scores shifted as the papers were strengthened (or not).

A histogram of the score change amount after Round 2 showing more scores were increased than decreased.

Looking at the change in mean score for the Round 2 papers, most papers moved by up to a whole point in either direction. The shift leans positive, which was good to see. Note that several papers had straight 5s at this point.
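The score-change histogram comes from comparing each paper's mean review score across the two rounds. A minimal sketch of that computation, using hypothetical paper IDs and scores (the real data lived in the review system):

```python
from statistics import mean

# Hypothetical review scores on the 1-5 scale, per paper and per round;
# these values are illustrative, not actual CSCW 2019 data.
round1_scores = {"paper_a": [2.5, 3.0, 3.5], "paper_b": [3.0, 3.0, 2.0]}
round2_scores = {"paper_a": [4.0, 4.5, 4.0], "paper_b": [2.5, 3.0, 2.0]}

# Change in mean score after revision, per paper; positive means the
# revised paper scored higher in Round 2.
deltas = {
    pid: mean(round2_scores[pid]) - mean(round1_scores[pid])
    for pid in round1_scores
}
print(deltas)
```

Plotting a histogram of the `deltas` values reproduces the shape shown above.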

A histogram of the final decisions and scores after Round 2 showing a bimodal distribution.

The final distribution of scores and decisions from both rounds is now more bimodal, which highlights the change in scores after revision.

A histogram of the Round 1 scores with Round 2 decisions marked.

This last histogram (also my favorite) shows the distribution of scores at the end of Round 1, but also visualizes the future outcome of those papers at the end of Round 2. As expected, papers in the Round 1 score range of 2.5 < µ < 3.5 account for most of the Round 2 Accepts, Shepherds, and Rejects.

Overall, we saw a positive effect: the process produced better papers and accepted about two-thirds of the Round 2 submissions. Throughout, we pressed the reviewers and ACs to highlight each paper's contribution and to look objectively for technical strengths and weaknesses. We never really accounted for scores during the process (I only generated these histograms after the conference). We did not want to simply cut an accept threshold by mean score, and we are very happy that the process created a positive effect and better papers by accepting more of the Round 2 submissions. Thanks to my co-chairs Airi and Darren and all our ACs and reviewers!
