NLP Should Anonymize Model Names during Peer Review

Ryan Cotterell
2 min read · Jan 23, 2019


The *ACL community prides itself on taking double-blind peer review seriously. Just read about our arXiv policy here: has any ML conference put up an arXiv ban around the submission deadline to help protect its authors from various biases? If you want a longer read, check out the full commission report here. But we currently have a problem with deanonymization through the naming of models during peer review. Indeed, a friend of mine, reviewing for NAACL 2019, sent me this link: https://github.com/anonymous/bert. If that didn't give away who the authors were, the title, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, certainly would have. Really now, who hasn't heard of BERT? This is unacceptable, as it deliberately undermines our community's decision to enforce double-blind peer review to the best of our ability. While keeping your pre-hyped model name in the title may be within the letter of the law, it certainly is not within the spirit of it.

So, rather than being purely unconstructive, I'm going to propose a solution. The programming languages community has already solved this exact problem. Many research groups there write paper after paper on the programming language they are developing. Since the language's name would deanonymize the submission, they are required to replace it with a placeholder macro, like ZZZ, so that reviewers focus on the content of the paper rather than on which group it came from. Why not do this for NLP models? Here's a snippet from the PLDI 2019 website:

Q: What exactly do I have to do to anonymize my paper?

A: …. In general, you should aim to reduce the risk of accidental unblinding. For example, if your paper is the first to describe a system with a well-known name or codename, or you use a personally-identifiable naming convention for your work, then use a different name for your submission.
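In practice, this is a one-line fix at the LaTeX level. Here is a minimal sketch of how it could look (the macro name \modelname is my own illustration, not anything official): during review the model answers to ZZZ, and the camera-ready swaps in the real name by changing a single line.

```latex
% Minimal sketch of the PL-style convention, assuming a LaTeX submission.
% The macro \modelname is illustrative, not an official *ACL command.
\documentclass{article}

% During review: the system is called ZZZ.
\newcommand{\modelname}{ZZZ}
% For the camera-ready, only this one line changes:
% \newcommand{\modelname}{BERT}

\title{\modelname: Pre-training of Deep Bidirectional Transformers\\
       for Language Understanding}
\author{Anonymous}

\begin{document}
\maketitle

We introduce \modelname{}, a new language representation model \ldots

\end{document}
```

Nothing about the paper's content has to change; only the label does.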

So, why not do it for NLP? Would you have recognized the BERT paper without the name in the title? I am not so sure I would have, since most papers on hyped-up neural architectures start to look the same after a while…
