Changes coming to the ACM ICER conference

Amy J. Ko
Aug 23 · 6 min read
Conferences evolve, but it takes time, a lot of volunteer effort, and a lot of debate (on Twitter!).

I first attended the ACM International Computing Education Research conference in 2013, around the time it converted from a workshop to a conference. My first impressions were largely positive: the work was reasonable and sometimes great (with the usual “how did that get in?” reaction true for any academic peer review process). I found the community welcoming and open to feedback, and ultimately left impressed that its founders had taken some of the best elements of conferences and curated a context for discourse that was constructive.

Of course, the conference wasn’t perfect. I felt like the review process and culture had a lot of room for improvement to become an outstanding, inclusive, impactful venue for sharing computing education research discoveries. But it was already off to a good start, and I sensed it had a large capacity for change. Since then, many program chairs have made steady improvements. Committed to the community and the conference, I have published at it regularly, bringing many of my PhD students and undergraduate researchers to the conference, and I have served on the program committee several times as a junior and senior member. Most recently, I’ve organized its doctoral consortium for two years.

After participating in the community for six years, and hearing many complaints from within and outside the community, I decided to serve the community in a larger capacity by applying to be a junior and senior program chair. I was selected last March to serve for two years (2020 and 2021). Once I was selected, I immediately started gathering more systematic feedback about how the community would like me to spend my time:

  • I’ve engaged many in the computing education research community on Twitter, documenting some of the most frequent concerns.
  • I’ve had one-on-one conversations with junior and senior people about how they perceive the community.
  • I’ve made my own subjective observations about the conference’s strengths and weaknesses, informed by my own students’ experiences (especially newcomers).
  • I’ve learned about the experiences of 40 doctoral consortium attendees, most of whom attended for the first time.
  • I’ve gathered insights, history, and feedback from past program chairs and site chairs.
  • I’ve spoken to past and present SIGCSE board members about their perspectives, goals, and interests in the conference.
  • I’ve documented the feedback from the conference’s open-ended survey about attendee experiences.
  • I’ve captured feedback that people have given me face to face and over email since they learned I would be program co-chair for the next two years.

Below, I enumerate the most frequent problems I’ve heard in all of this feedback and share some of the plans that Anthony Robins and I have for addressing the feedback for 2020 (and/or in 2021, when I serve as senior program chair):

  • Lack of transparency. A large number of people report feeling like the conference planning happens secretly, without consultation with the broader community, and without communication to the broader community. Anthony and I want to address this by gathering the community’s insights on changes we make and by communicating regularly about changes (as this blog post is attempting to do).
  • Program and site chair workload. In addition to having to plan a conference event and program, the program and site chairs for the conference have also had to recruit future program and site chairs. It’s hard enough to recruit chairs for 2 years, and adding this future planning work only makes it harder. The SIGCSE board agrees that it’s time for the conference to have a steering committee that’s responsible for future conference leadership recruiting. We’re working to set up the inaugural committee now, pending SIGCSE board approval.
  • Narrow program committee experience. A major complaint about reviewing, especially amongst learning scientists interested in contributing to computing education, is that the program committee often lacks expertise in qualitative methods, advanced quantitative methods, and programming languages, making it hard to publish work that doesn’t fit past committees’ expertise. One reason for this narrowness is that in the past, program committee members have had to be past ICER authors and past ICER attendees. We’re seeking to change this policy, and invite a much more intellectually diverse committee in future years that can properly handle more diverse submissions.
  • Restrictive page and reference limits. Researchers who submit qualitative work, which often requires more space to describe because the data are natural language rather than numbers, often feel at a disadvantage with only 8 pages. Additionally, there are deep concerns about the ability to publish replicable work, as 8 pages artificially reduces space for key methodological details. Finally, the 2-page reference limit artificially reduces authors’ ability to properly cite prior work. Anthony and I propose moving to a 10-page limit with no limit on references, as most other ACM conferences have done.
  • Reviewing consistency. Most of the typical problems with peer review have applied to ICER as well: 1) overly short reviews, 2) “checklist” reviewing, and 3) reviewers recommending acceptance or rejection because they don’t like the topic or the title, or because they don’t believe the results have immediate practical relevance in the world. Other conferences have had some success with reviewer trainings; we’re going to try one this year, including some policies, and instruct meta-reviewers to enforce those policies when evaluating the reviews they summarize. We hope to crowdsource guidelines from the community.
  • Program chairs overriding reviewers and meta-reviewers. The concentration of power amongst the program chairs, as in any conference, is problematic. Following best practices at other conferences, Anthony and I are exploring holding a remote PC meeting with the meta-reviewers to discuss borderline papers, ensuring that all paper discussions are consensus-driven by a representative group of senior reviewers, not just the individuals who happen to be chairing the program in a given year. This should bring more consistency between years, and restore some faith in the review process reflecting the community’s standards (at least to the extent the PC reflects the community).
  • Reporting standards. There are an increasing number of standards that our community has discussed around the use of theory, the reporting of statistics, and the discussion of participant demographics with respect to diversity. Anthony and I are going to crowdsource an authoring guide that shows authors who submit to ICER how to report and discuss these various aspects of their work. Of course, we don’t expect the community to agree upon everything, and so the guide will consist of recommendations and links to writings that authors should reflect on; we hope these recommendations will not only help new authors know how to report their work but also nudge past authors towards more consistency.
  • Publishing all progress. I heard from many past chairs that there have been acceptable papers that were rejected due to limited time in the conference’s single-track program. The prevailing sentiment in our community is that this is an unacceptable constraint on progress in our field. Anthony and I will do our best to ensure that the review process doesn’t reject papers for this reason. To achieve this, we’ll explore a range of strategies for ensuring the program can accommodate all publishable work (e.g., shorter talks, extending the third day, and other lightweight approaches to parallelization in a single room).
  • Inclusive session chairing. A number of participants in our community who are highly sensitive to sounds have struggled with the varying strategies that session chairs use to transition from table discussions to Q&A (loud claps, shouts, etc.). Anthony and I will crowdsource session chair guidelines so there’s a consistent set of strategies that are inclusive to all attendees.

There are a large number of other ideas we’re discussing that may or may not be implemented this year, including revisions to awards, scaffolding for networking on nights without organized events, more diversity in the program session formats, presentation guidelines, and more radical things like remote participation.

We’d love to know what you think of our plans:

  • What problems are missing from this list?
  • What do you think about our proposed solutions to the above problems?
  • How would you like to engage in helping us implement the proposed solutions?

Send your feedback to ajko@uw.edu and I’ll organize it along with all of our other priorities.

Obviously, the ICER organizing committee for 2020 is entirely volunteer-run, so we can’t implement every change and still do our day jobs as researchers, teachers, and administrators. Moreover, since some of the changes we make might not be approved by ACM or the SIGCSE board, we can’t promise that we’ll successfully make all of them this year. We are, however, committed to listening, to transparently communicating about our progress, and to seeking your input at all stages.

We hope to see you at ICER 2020 in Dunedin, New Zealand, August 10–12!

Bits and Behavior

This is the blog for the Code & Cognition lab, directed by professor Amy J. Ko, Ph.D. at the University of Washington. Here we reflect on what software is, what effects it's having on the world, and our role as public intellectuals in helping civilization make sense of code.

