Raia Hadsell, Kyunghyun Cho and I have long been involved in organizing the conferences and journals that support our scientific publication ecosystem. The field of machine learning has grown exponentially over the last decade, and we have observed firsthand the pains that come with such growth. Some of these pains are practical, manifested in crashing review platforms or overcrowded poster halls. But others strike deeper, revealing themselves in disenchantment with the exclusivity of our top conferences, or in criticisms that conferences are not sufficiently successful at highlighting the most impactful work, have slow turnaround from submission to decision, create high-stakes pressure and stress around a handful of fixed deadlines, and are perceived to be declining in the quality of their peer review. These are hard problems to solve, and although we ourselves have worked to address them, we feel that there is still work to do. In that spirit, we have been working on a new contribution to our publication ecosystem.
With this post, we’re happy to announce that we are founding a new journal, the Transactions on Machine Learning Research (TMLR). This journal is a sister journal of the existing, well-known Journal of Machine Learning Research (JMLR), along with the Proceedings of Machine Learning Research (PMLR) and JMLR Machine Learning Open Source Software (MLOSS). However, it departs from JMLR in a few key ways, which we hope will complement our community’s publication needs. Notably, TMLR’s review process will be hosted on OpenReview, and will therefore be open and transparent to the community. Another difference from JMLR will be the use of double-blind reviewing; as a consequence, the submission of previously published research, even with extensions, will not be allowed. Finally, we intend to work hard on establishing a fast-turnaround review process, focusing in particular on the shorter-form submissions that are common at machine learning conferences.
As these are all features of conferences like NeurIPS or ICLR, we hope that TMLR will become a welcome and familiar complement to conferences for publishing machine learning research. TMLR will also depart from conferences’ review process in a few key ways.
Anytime submission Being a journal, TMLR will accept submissions throughout the year. For this, we will be implementing a rolling review process which will be executed on a per-paper timeline.
Fast turnaround We are implementing a review timeline that will deliver reviews within 4 weeks of submission and decisions within 2 months. To enable this, we will implement a capped workload for action editors (the equivalent of conference area chairs) and reviewers, so that the load remains lightweight throughout the year, while also requesting a commitment to accept all assignment requests.
Acceptance based on claims Acceptance to TMLR will avoid judgments that are based on more subjective, editorial or speculative elements of typical conference decisions, such as novelty and potential for impact. Instead, the two criteria that will drive our review process will be the answers to the following two questions:
- Are the claims made in the submission supported by accurate, convincing and clear evidence?
- Would some individuals in TMLR’s audience be interested in the findings of this paper?
The first question therefore asks that we focus the evaluation on whether the claims are matched by evidence. If they are not, authors will be asked either to provide new evidence or simply to adjust their claims, even if that means the implications of the work are reduced (that’s OK!). The second, though somewhat more subjective, aims at ensuring that the journal features work that contributes additional knowledge to our community. A reviewer who is unsure whether a submission satisfies this criterion will be asked to assume that it does.
Certifications This will be a unique feature of TMLR, aimed at separating editorial statements on submitted work from their claim-based scientific assessment. An accepted paper will have the opportunity to be tagged with certifications, which are distinctions meant to highlight submissions with additional merit. At launch, we will include the following certifications:
- Outstanding Certification, for papers deemed to be of exceptionally high quality and broadly significant for the field (along the lines of a best paper award at a top-tier conference).
- Featured Certification, for papers judged to be of very high quality, along the lines of a conference paper selected for an oral or spotlight.
- Reproducibility Certification, for papers whose primary purpose is reproduction of other published work and that contribute significant added value through additional baselines, analysis, ablations, or insights.
- Survey Certification, for papers that not only meet the criteria for acceptance but also provide an exceptionally thorough or insightful survey of the topic or approach.
Sections on the TMLR website will be dedicated specifically to these papers, in order to give them additional visibility.
Reviewing assessment and rewards Following recent practice by conferences, reviewing quality will be monitored through evaluations of reviews by action editors. Based on these assessments, we intend to reward the best reviewers by assigning their submissions the Expert Certification, with the implication that their work will benefit from this additional publicity. We will also make our best reviewers list available to the organizers of conferences in our field, and encourage them to consult it when considering candidates for roles such as area chairs.
TMLR is in many ways an experiment. The ideas mentioned above will certainly evolve and adapt with time, as the community embraces it. We hope TMLR can also contribute to the conference publishing ecosystem. For example, we could imagine a future where submissions to TMLR could also be featured at ML conferences through a journal-to-conference track, with such submissions being given a conference-specific certification. We could even imagine a larger ecosystem of smaller conferences and workshops blooming, where each subcommunity would be given editorial ownership of a dedicated TMLR certification and be in charge of deciding which papers to highlight for their community. The ML community has demonstrated great innovation in approaches to science dissemination, and we hope TMLR can become an enabler of more transformative changes to come.
Finally, TMLR will only be as successful as the effort that our community decides to put into it. Therefore, we hope you will join us on this adventure. First, if you receive an invitation to join our team of action editors and reviewers, we hope you will accept. The coming weeks will be spent sending these invitations. Additionally, we will soon be entering the final testing stage of our OpenReview workflow, to ensure we are well prepared to start receiving submissions sometime in March. To that end, we are seeking volunteers. You can send us your interest in helping out through this form. We also welcome feedback and suggestions regarding this initiative, which you can submit through this form.
We want to thank the OpenReview team for their invaluable support in making TMLR a reality, and the Editors-in-Chief of JMLR (Francis Bach, David Blei, Bernhard Schölkopf) for welcoming TMLR in the “MLR family”.
We also thank Fabian Pedregosa for joining us as Managing Editor, as well as our Advisory Board which has already provided us with tremendous feedback for the drafting of our reviewing guidelines:
- Alexandra Chouldechova
- Andrew McCallum
- Bernhard Schölkopf
- Devi Parikh
- Konrad Körding
- Lillian Lee
- Natalie Schluter
- Shakir Mohamed
- Yoshua Bengio
We are also thankful for the many discussions we’ve had in the past several months with other members of our community, notably members of the CIFAR Learning in Machines & Brains program, on a number of ideas that are behind TMLR.