SIGCHI Equity Talks #3: Reviewing & Mentorship

Neha Kumar · Published in ACM SIGCHI · May 24, 2021

Neha Kumar & Shaowen Bardzell, SIGCHI VPs at Large

Previous Equity Talk Summary: Making SIGCHI Accessible
All Equity Talks (running March through June, 2021)

Shaowen and I (/Neha), both VPs at Large, started by introducing ourselves and our motivations for hosting this roundtable in the first place. Cale Passmore, our moderator, then gave his introduction, before starting us off on two threads and a question:

The first was a thread woven through both previous talks, applying to aspects of reviewing and mentorship here as well: the complications and oppressions that we face are not experienced equally, yet solutions to these oppressive practices and norms benefit everyone.

The second was a comment brought up by Melissa (Densmore), Stacy (Branham), and several others in our first talk: “relationships, not mentorship.” However you personally define these terms, this comment calls us to reflect on the power we wield as mentors, reviewers, and scholars in relationship with one another. We are brought to reflect on our motivations in teaching, reviewing, and research, and to take account of the relationship between why we do a thing and the emotional, experiential, and social consequences of doing it. Reviewing is a mentoring process. It is a relationship with other researchers, with society as a whole, and with knowledge.

So I’ll open this up to anyone in attendance who has some starting thoughts for us. What are some of the challenges to a positive reviewing process you’ve experienced, and how might mentorship be leveraged to address those challenges?

Below is a summary of the responses, organized per the following themes: the reviewing process, standards around acceptance, epistemological tensions, and mentorship within/beyond reviewing.

The Reviewing Process

To ensure that there are enough reviewers for papers in the first place, Molly Nicholas suggested that authors be automatically signed up to review as part of the submission process.

Maryam Mustafa noted that the goal of the reviewing process appeared to be to reject, not to mentor. She wondered if there was a way to shift the focus from looking to reject work towards mentoring that work, adding that whenever her papers had been shepherded, it had been a joyful experience. Tamara Clegg responded to this concern by remarking on the workload that ACs have to deal with: when an AC handles many papers, it becomes harder to go through each in detail, whereas in the shepherding process an AC usually shepherds only one or two papers.

There was more feedback on shepherding, though. Nazanin Andalibi added that her paper was once shepherded, and “the shepherd clearly tried very hard to make my work about their work. I stood my ground, but just saying that shepherding can be a way of control and exercising power, and we need accountability and some form of quality control.” Vinoba Vinayagamoorthy responded by adding, “I would encourage conference organizers to have guidelines for shepherding, to figure out what the shepherding process is meant to improve for the benefit of the community.”

When reviewing went wrong, it was mentioned, conferences needed to offer a detailed appeals process. Also, ACs needed to be empowered to push back against rude comments and reviews: “‘I found this to be a challenging paper to read and review’ is not constructive.” Several participants felt that ACs could do more to actively shape the reviewing process: “Instead of just sending a brief email to say ‘please discuss,’ they could explicitly shape the conversation in different ways.” Priya Kumar concurred: “I once saw a CSCW AC ask a reviewer to revise language that was somewhat disparaging. And once I suggested a CHI AC revise their language in a meta-review to be a little more generous and they did. I offer this not at all to suggest that I’ve done something good, but to normalize the idea that we as reviewers/ACs etc. can invite/encourage people to revise their language/rethink their ideas in the review process.”

I (/Neha) also brought up possible equity concerns around awards, and what is really being awarded in a paper, mentioning that “at least at CHI, awards are discussed and given/nominated after the PC has seen the author names.” We discuss standards around acceptance next.

Standards around Acceptance

Geraldine Fitzpatrick pointed out that reviewers and ACs were given explicit instructions to look for reasons to accept work, but the challenge was getting this to play out when it mattered. Michael Muller responded, sharing from his own experience that even when a committee followed this advice and accepted more work, it could lead to problems, because the larger community was not ready to deal with suddenly higher acceptance rates:

We want to find reasons to accept papers. That is true. Within a committee, we sometimes can work very hard toward that. And I won’t go into details, but I was on a committee recently, which worked really hard to do that. And we were slapped pretty badly for having insufficient standards, or not doing a quality job, because we accepted a greater percentage of papers than was the norm. And so I’m left with a quandary there about where do we need to be telling people about the need to look for reasons to accept? It may not be among the people on this call. It may be in the broader community, which was, as I said, pretty unhappy with us for taking more papers than they thought we should have done. And I don’t know what to do about that, but I sure wish I did.

Priya added to Michael’s point above: “I don’t want to detract from the reviewing/mentoring focus of today’s session, but I think this dispiriting focus on keeping the acceptance rate low is connected to the increasing numbers of papers that people are submitting. The sort of growth of HCI within this community. Which, of course, gets at the broader publish or perish pressure cooker in which many of us live. Perhaps we collectively also need to submit fewer papers, which is a comment I offer as a half joke.”

Firaz Peer and Emeline Brule both felt that there was insufficient transparency regarding what articles were considered good enough for acceptance at a particular venue. Emeline shared that “the papers we wrote five years ago are just not the same as the ones [we] are writing now.” Committees could attempt to make the process clearer: “Like what are the kind of papers that we accept in this committee? Why do we accept these kind of papers? What are the kind of requirements? And making these agreements that tend to happen in informal discussions.” Firaz emphasized the importance of making clear if and why standards for CHI were different from those for other venues. Susan Dray added: “There’s a huge tension between keeping up the brand as rigorous — for example, reject — versus encouraging scholars — for example, accept. At CHI, the former is key.”

Yolanda Rankin brought up questions regarding what made CHI the premier conference for HCI: “I’m saying this from the perspective of, who decides what is deemed acceptable work, and the criteria for doing so? This gets at the power structure that defines the review process. There are some reviewers who view themselves as gatekeepers, and as such, will intentionally set out to not accept certain papers. I’m not saying that CHI is not a good conference. I’m saying that these power structures benefit those whose work has been published repeatedly, in such a way that they become the experts, and in some cases, gatekeepers. Such power structures perpetuate the cycle of authors whose work is continuously rejected, and this has to be acknowledged and dealt with.”

Kashyap Todi added to Yolanda’s point: “I think this label of CHI being a rigorous conference with high standards is anyway somehow made up. We have this arbitrary acceptance rate of 25%. But it’s somewhat a well-known fact that about 50% of the papers that are in this middle region essentially are a coin toss. So they could go either way and you wouldn’t know. There wouldn’t be a way to actually judge this. So one way forward might be to actually consider thinking about drastically increasing the acceptance rate to something like, say, even 50%. And find other ways to judge quality of paper than just see whether they’re accepted or not.”

Somewhat relatedly, Naomi Yamashita brought up a point about CHI reviews, saying that she considered it problematic when reviewers lowered their scores because authors had not addressed issues pointed out in their reviews: “I think there should be some kind of rule or mechanism to allow reviewers to only raise their scores after a rebuttal.”

Geraldine reiterated that general and program chairs always said that every paper should be accepted on its merit, and that they would then deal with managing limited space, rooms, and time to include all these papers. Despite this, acceptance rates still ended up in the twenties. Responding to Geraldine, Tess Tanenbaum noted:

If we eliminate the space-time constraints on paper acceptance, then what is to stop us from publishing every paper that is submitted, aside from those that make errors in their science? For example, statistical errors, errors in analysis, etc. So, we currently have these questions around how do we set a standard? How do we educate and inform that standard? And then how do we ensure or secure that standard, or those standards that are set? Especially when we’re thinking across different cultures, trying to actually be more equitable with marginalized regions, and to help foster good relationships with emerging scholars who maybe haven’t developed that thicker skin that a lot of us have had to at this point.

Epistemological Tensions

On assigning papers to ACs, Molly brought up the issue of epistemological mismatch, and how this process, if improved, could make reviewing more “fun and equitable” for everyone. Tess mentioned in response to this that she often felt “like a fish out of water in the SIGCHI reviewing processes.” She added:

And the challenge of how we create a reviewing environment that is supportive and constructive that is also capable of encompassing the breadth of disciplines that are covered underneath the banner of SIGCHI — it’s both our greatest opportunity and our greatest challenge, as far as I’m concerned. It’s extraordinary that somebody like me found a home in HCI, and found a community where my ideas and my work resonated enough with other people to find a place and make a contribution. Certainly, I wasn’t trained in HCI, and yet, there’s enough room inside of our community for perspectives like mine. It also means that I am perennially struggling with the fact that the vast majority of the people reading my work don’t have the same epistemological perspective that I have. They don’t have the same methodological grounding that I have. And this is the tension of our community that I love. That I’m in conversation with all of these perspectives, trying to find a way to position my ideas within a conversation that holds all of these different methodologies and expectations makes me a better scholar. But it also means I have to fight a lot harder to articulate the value of the work that I do. And it’s exhausting. I agree it’s exhausting. And it can especially be overwhelming and disheartening when you’re a young scholar trying to find a place in a field, and you feel like the field doesn’t get you, is hostile to you, is hostile to new ideas. And I’m not sure that I’m coming with any solutions, aside from saying I hear you, and I experience this. And I see it as both an opportunity, because I get to be here, and I get to have these conversations. And also, one of the challenges that we as a community face, especially because the reviewing process is the place where we are collectively negotiating what it means to make a contribution to knowledge. It’s where we’re collectively asserting our standards, our criteria, our ideas about what knowledge is or should be. And so it’s the place where we’re doing the most mentoring of each other, and it’s the place where we are really establishing what HCI is, what our identity is as a field.

Sheena Erete added to the point about epistemological mismatch, bringing up the experiences of those who work with “non-traditional populations, or populations that are outside of and/or take a stance that’s around the creation of technology. Or not, right? The non-creation of technology.” When reviews arrive containing bias, racism, or sexism, the process does not allow for these things to be addressed and rectified. Even when the harm is not intentional, there needs to be a process in place whereby the reviewer can learn, and some form of repair for the author who spent time on the work only to encounter a bad match. This is especially needed for work that is new to HCI. Sheena concluded, “And I think we just have to think a little bit more about the process on the reviewer side, to how do they review papers that are non-traditional. And then also, how do we rectify when people do have these negative experiences with the review process?”

Adding to Sheena’s point about writing about non-traditional populations, Noopur Raval shared:

I just want to add two quick points to the last speaker’s point. One is that, in fact, I’ve heard these things so many times before, where my advisors or senior scholars have tried to just tell me that this isn’t personal. That I shouldn’t take a rejection as sort of an indictment of my work. But it’s obviously really hard to take that coming from Global South geographies, or writing about things where I am in the middle of them, for someone to tell me don’t take it personally. This is just what happens to knowledge that is at the margins of this field. My only problem, or something that I want the ACM community to broadly discuss is, when does this stop being the case? Even if someone tells you that you’re just a sociologist writing within HCI, I wonder if there’s a discussion possible around whose imagination of HCI, or whose community it was originally. Because I honestly, again, just like Tess, I’m not from this community. But I just happened to land up in an information school, and then this became a great venue to publish and get my work out quickly. And I wear my badge of honor as the fact that I never had a paper accepted at CHI, but have like two or three papers at CSCW. And I take that as indication just of the fact that I feel like there’s something going on there. So, I don’t know, make of it what you will, but I feel like there’s a larger discussion waiting to happen. And I did hear at one of the town halls that happened after the conference, where there was some tension around the systems track. And I wonder if that also is a clue to an original community around which this conference, or these conferences, were centered, and whether something has shifted fundamentally.

On language, a point that comes up frequently: reviewers often comment on the need for revision by “a native speaker”, but, as Andre Freire pointed out, this carries a harmful message about ownership of the language; it is better to give feedback asking for “language improvements” and “without traces of colonialism”. Stacy Branham added that she would like to see a SIGCHI where we can publish in our languages of choice and produce translations in other languages: “Could we have a community where expert English language skills are not a prerequisite for contributing to HCI knowledge?”

Nic Bidwell pointed out the need to think carefully about our values as HCI researchers. “In a huge discipline like HCI, we have a lot of industry precedence, and we all know that at the moment, this is creating a lot of issues for us. Well, it’s always been there, but they’re very obvious now, in all aspects of our experience, with social media and what have you. And that creates a sort of a culture, and a sort of set of values that young people who are going for tenure — or any people going for tenure, in the States or Europe — sometimes feel they can’t question. That that is the only way. But you and I both know that people’s lives in Southern Africa are full of a much broader variety of things. Some people, as you say, would like to take those roles in the States and Europe, but they can also bring a lot of other things to a conversation about what matters, and how to care, and how to think about things. Which is not purely about whether it’s an indigenous knowledge system, or whether it’s a lack of connectivity in the rural areas. Sometimes this is about the role of family. Role of religion. Role of other things, all of which affect how we interact, and the relationships we have with people. So, yes. I think it is a very deep discussion, and I think it’s a very rich and important one that we can always learn from other people.”

Mentorship (within reviewing and beyond)

Reviews can be problematic. Katta Spiel pointed out that they once had a review telling them “the author is too critical of the field of HCI and its methods” and added, “I feel like it’s a badge of honor.” Geraldine remarked that she had seen subcommittee/associate chairs check in “to help foster a more constructive tone in reviews.” Tammy mentioned that “equity officers” could be introduced to watch out for equity-related issues, comments, etc. — this would be a different role from a reviewer’s.

Maryam mentioned that it was important to “think about ways to mentor PhD students starting their journey, who will eventually become ACs and reviewers. In an earlier session, someone mentioned shadow PCs.” Melissa Densmore supported the suggestion to incorporate shadow PCs (used for ACM Compass 2021) and revise-and-resubmit processes. Tayana Conte mentioned, “I do think that the unofficial guide is great. It’s great when it points out the bias, and so on. But sometimes we need even more basic training. Even discussion is so good to have, and having these points of mandatory discussion would be great for us as a community.”

Melissa remarked that although the “unofficial guide” was great, it was really long and ACs/reviewers were given a lot of reading material to sort through. Geraldine affirmed that it was important for busy reviewers to receive reminders that were easier to access and digest: “We have guidelines, but I know many people are too busy to read them. And also, think they know what reviewing is, and don’t need to review them.” Nazanin added that in addition to guidelines, there was the problem of the lack of accountability, “about what structures might we be able to put in place so that the types of gatekeep-y or harmful or power play type AC or review behavior doesn’t get rewarded, and doesn’t get to happen.”

Naomi talked about mentoring from the perspective of a non-native English speaker, saying that increasing the mentor pool was important, and this could be done by improving the matching of mentors to mentees so that they could talk in their first language. This, she felt, could potentially strengthen relationships. She shared that she herself was hesitant to take on the mentor role, because mentoring in English would require more than double the effort of mentoring in Japanese, and she was not sure if junior reviewers (e.g., PhD students studying in the US) would want guidance from researchers in other countries: “So, what I’m trying to say is I believe introducing a matching mechanism that takes geographical location or language into account would motivate more people to sign up as mentors.” Nic said (and Judy van Biljon agreed) that she would like to see “more non-US Euro mentoring US/Euro researchers, or at least having a chance to comment, because it’s all too often a one way perspective of what counts/how to develop. And of course, that brings us back into questions about different types of colonialism, even, a further epistemological mismatch.”

Michael shared that the CSCW conference had the student reviewer mentoring program, where students could apply to be mentored by senior people: “Then there’s a matching process, and the student becomes a reviewer, and the mentor is available to advise or comment on the review. The authors receive thoughts from two minds in a single review. Could we do this in other conferences? We would need to find a lot of mentors to make this work.”

Sheena also brought up mentorship beyond the review process: “Who is excluded from being mentored? Those who tend to be on the margins typically do not have opportunities to be mentored, or to be in mainstream HCI. But the bigger question is, how should we mentor young scholars beyond getting a paper accepted to CHI? And how do we mentor or advocate for young scholars without overburdening scholars who are non-majority/traditional in their HCI research and/or identities?”

We had to bring our conversation to a close at this point, but with the hope of continuing this discussion about mentorship at a later date. Meanwhile, stay tuned to learn more about Infrastructuring Equity from Equity Talk #4.

Missed the talk? See the sketchnote summary that Miriam Sturdee created to give a visual representation of all the themes covered in this blog post. Thank you to our sketch artist!
