Introducing: ICLR Reproducibility Challenge Interview Series

We interview four teams with a mission to reproduce the latest in machine learning research as part of the 2019 ICLR Reproducibility Challenge.

Cecelia Shao
Comet
3 min read · Apr 24, 2019


Reproducibility is a powerful criterion for improving the quality of research. A reproducible result is more likely to be robust and meaningful, and reproducibility rules out many types of experimenter error (whether fraud or accident). Fundamentally, good research needs to be reasonably reproducible in order for others to build upon the work.

Dr. Joelle Pineau, an Associate Professor at McGill University and the lead of Facebook’s Artificial Intelligence Research lab (FAIR) in Montreal, has been one of the strongest voices for reproducibility. Since Joelle’s keynote on reproducibility in reinforcement learning, powerful momentum has been building around the topic.

At Comet.ml, our mission is to enable reproducibility in both academic research and industry. Comet is doing for machine learning what GitHub did for code: we allow data science teams to automagically track their datasets, code changes, experimentation history, and production models, creating efficiency, transparency, and reproducibility.
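To give a sense of what that tracking looks like in practice, here is a minimal sketch using the comet_ml Python SDK. The API key, project name, hyperparameters, and metric values below are placeholders for illustration, not part of any specific experiment.

```python
# Minimal sketch of experiment tracking with the comet_ml Python SDK.
# The api_key and project_name are placeholders; metric values are illustrative.
from comet_ml import Experiment

experiment = Experiment(
    api_key="YOUR_API_KEY",               # placeholder
    project_name="reproducibility-demo",  # hypothetical project name
)

# Log hyperparameters so the run can be reproduced later
experiment.log_parameters({"learning_rate": 0.001, "batch_size": 64})

# Log metrics as training progresses (dummy values here)
for epoch in range(3):
    experiment.log_metric("train_loss", 1.0 / (epoch + 1), step=epoch)

experiment.end()
```

Once a run is logged this way, its parameters, metrics, and code state can be revisited and compared against later runs in the Comet UI.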

To encourage reproducible machine learning research, Comet.ml is 100% free for members of academia. Register for academic access here.

ICLR Reproducibility Challenge Interview Series

The Comet team collaborated with the ICLR Reproducibility Challenge organizers to interview four teams that participated in this year’s challenge.

For the next two weeks leading up to ICLR, we will be publishing these interviews as a series. We asked each team about their approach to reproducing their selected paper, the challenges they encountered along the way, and their reflections on machine learning research.

➡️ Follow us on Medium for new interviews

The details for the 2019 ICLR Reproducibility Challenge are as follows:

You should select a paper from the 2019 ICLR submissions, and aim to replicate the experiments described in the paper. The goal is to assess if the experiments are reproducible, and to determine if the conclusions of the paper are supported by your findings. Your results can be either positive (i.e. confirm reproducibility), or negative (i.e. explain what you were unable to reproduce, and potentially explain why).

Essentially, think of your role as an inspector verifying the validity of the experimental results and conclusions of the paper. In some instances, your role will also extend to helping the authors improve the quality of their work and paper.

These Challenge submissions were reviewed by exceptional reviewers from academia and industry (see the full list here). There was also an incredible organizing team behind the Challenge.

You can find the Reproducibility Challenge’s Github repository here. Accepted reports will also be published in the ReScience journal!

🌟 Follow us on Medium to be notified when we publish our first interview, with EPFL’s Francesco Bardi, Samuel Edler Von Baussnern, and Emiljano Gjiriti, who reproduced “Learning Neural PDE Solvers with Convergence Guarantees”!

Interview #1 is out now — read it here
