Emerson interview (part 2); writing for HCI venues

Amy J. Ko
Published in Bits and Behavior · Oct 1, 2009

Here is part two of Emerson Murphy-Hill’s interview with me. This part covers some of the challenges in publishing in HCI venues.

Q: A prominent proponent of empirical software engineering once told me that he typically spends a full page discussing the threats to validity of his evaluations. At the same time, it’s not unusual to find a CHI paper that doesn’t discuss threats. How does one choose which threats to include and exclude, and how to present those threats, to the CHI community?

Most CHI papers clearly discuss threats, just not in a section titled “threats to validity.” This tradition comes from CHI’s roots in cognitive psychology research, where the threats were inherent to the study design and discussed throughout the method and discussion sections. A separate section was never needed because discussion of the limitations was expected to appear throughout the article. As a guideline, one should always discuss all non-obvious threats to validity. It’s a necessary part of honest scholarly work.

Q: Where do you draw the line about whether a threat is obvious?

Some threats are common to all empirical research: the sample size was too small, the study may not generalize, situations may not have been representative. These are standard disclaimers, and it’s always worth mentioning them briefly. The ones to really spend time on are the definitions and measures one uses: how likely they are to actually capture the concept of interest (construct validity) and whether they have any meaning for the real world (ecological validity).

Q: Have you had an HCI reviewer suggest that your work is better suited for a software engineering venue, or vice versa? If so, how did you deal with the suggestion? If not, how do you think you preempted it in the first place?

No, I’ve never had a reviewer suggest that. Of course, the work that I publish at HCI venues usually has more to do with the actual work of software engineers, their collaborations, or their interactions with users, as opposed to conventional software engineering research on automation. I think one of the main stumbling blocks that software engineering researchers will have trying to publish at HCI venues is demonstrating that the problems they work on are significant. For example, a common type of software engineering paper will find some specific set of circumstances that can be exploited to automate bug finding or prove correctness within a certain set of assumptions. In general, HCI researchers aren’t interested in these types of narrow contributions, unless there’s some good evidence that the set of circumstances exploited is large and generalizable to some degree.

Q: In an HCI paper, where do you make the argument about generalizability? Is there room for speculation?

There’s always room for speculation. That’s what discussion and limitation sections are for. The whole point of studies is to use a kernel of rigorous and trusted analysis in order to make predictions about the larger context of the world. In fact, I think too many software engineering papers simply report results and ignore what impact a tool design or study might have on our understanding of software engineering. Tools, after all, are embodiments of theories about the world, and they have just as much potential to teach us about our surroundings as studies — perhaps more.

Q: As a reviewer for HCI venues, what is the most common mistake that you see software researchers making?

Being more fascinated with the technology itself than with what the technology does for people (whether those people are technology users or hardcore software developers). More often than not, I read software engineering papers submitted to HCI venues that try hard to persuade me that the clever tricks they devised are interesting enough to overcome the minimal impact those tricks will have on users’ work and experience with a tool.

I also see software engineering researchers try to make knowledge contributions about software development practice without citing the large body of work on group work done at CSCW and other conferences. HCI researchers tend to view software development as just one of many examples of collaborative work. The argument that it’s special and unique usually doesn’t fly without evidence.

Q: Although HCI submissions are often anonymous, people tend to be suspicious of “outsiders,” and may treat outsiders’ work with some undue hostility. What can a software researcher do to avoid identifying himself as an outsider in the HCI community?

All HCI researchers are outsiders. There’s not enough of a concentration on any one topic or problem for there to be a common core. The best way to avoid sounding naive is to read as much as possible about a topic outside your discipline. HCI draws from cognitive science, psychology, design, computer science, engineering, anthropology, social psychology, communication, education, and several other fields. Chances are, there’s work in all of those fields you should at least be aware of, if not read and cite.

Q: Suppose that you attempt to solve a usability problem for a certain kind of software tool; HCI researchers may perceive that you are solving only a very narrow problem, and thus your contribution is small. How do you deal with that?

The typical solution to this problem is finding a community that thinks your problem is broad instead of narrow. HCI research tends to have a fairly broad view of the world, since it’s so applied; understandably, then, many problems will be viewed as small (just as any non-academic would view our problems as narrow). The best one can do is demonstrate what relation the problem has to society and what impact it might have on the world, not just on the tool’s users.

Q: Where should software researchers send their human-centered papers?

CHI is an obvious choice, but it’s the premier conference in HCI, which makes it a difficult target even for very experienced HCI researchers. Beyond CHI, what would we recommend? I’ve been investing in VL/HCC, the logical successor to the sadly defunct Empirical Studies of Programmers (ESP) conference. It’s a strong secondary conference, with a first-rate community, but with less content about professional software engineers than I would like. I’m hard-pressed to recommend another HCI conference.

