Code Ocean teams up with NeurIPS to support their 2019 Reproducibility Challenge

Code Ocean · Nov 19, 2019

This year, we are sponsoring the NeurIPS Reproducibility Challenge ahead of their 2019 conference.

The challenge is part of the reproducibility program that the NeurIPS program committee rolled out this year to support high-quality scientific research. Participants will replicate the experiments described in a paper of their choosing from the 1,430 papers accepted at NeurIPS, assessing whether the experiments are reproducible and whether the paper’s conclusions hold up under independent investigation. The goal is to provide a fun learning experience for newcomers to machine learning while also contributing to original research. The challenge has three tracks, each with a different level of reproducibility requirements.

In our experience, the first step in reproducing a computational finding is to recreate the computational environment used in the study. Traditionally, this step requires both software skills and hardware resources (in the case of AI, GPUs can make a huge difference). Code Ocean is offering our GPU-enabled cloud platform for participants to use for free in any of the three tracks. After creating a Code Ocean account, participants can quickly configure a ‘compute capsule’ by modifying existing images pre-built with popular deep learning frameworks (e.g., TensorFlow and PyTorch). Once one team member configures a capsule, all other members can immediately use the same environment on Code Ocean, and they can also download the capsule’s Dockerfile to recreate the environment locally, as in the sketch below. Our goal is to make reproducibility and collaboration as easy as possible for all participants. As of November 11, 2019, 150 papers have been claimed in the challenge, which concludes on December 27th.
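To make the local-recreation step concrete, here is a minimal sketch of the kind of Dockerfile a capsule might produce. The base image tag and version pins are illustrative assumptions on our part, not Code Ocean’s actual registry paths; the real file is the one downloaded from the capsule itself.

    # Hypothetical sketch: the base image and version pins below are
    # illustrative, not taken from an actual Code Ocean capsule.
    FROM tensorflow/tensorflow:1.14.0-gpu-py3

    # Pin any additional Python packages so that every team member
    # rebuilds exactly the same environment.
    RUN pip install --no-cache-dir \
            numpy==1.16.4 \
            torch==1.2.0

From there, the environment can be rebuilt and entered with standard Docker commands, for example docker build -t capsule-env . followed by docker run --gpus all -it capsule-env (assuming Docker 19.03+ with the NVIDIA container toolkit installed for GPU access).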

Reproducibility in AI is challenging for many reasons [1]. We at Code Ocean believe that, in addition to transparency (publishing code, data, environment configuration, etc.), making the whole project readily executable by other researchers, without any upfront time spent on installation, can significantly lower the barrier to reproducibility. In our experience, the earlier a research team adopts a reproducible workflow, the smoother it is to share the work when it is time to publish.

Over the next month, we’ll post periodic updates about the work the teams are undertaking and retweet some of the teams’ updates via our Code Ocean Twitter account. After the conference, we’ll post a recap of insights from our time there.

[1] Henderson et al., “Deep Reinforcement Learning that Matters,” https://arxiv.org/pdf/1709.06560.pdf

Xu Fei is an Outreach Scientist at Code Ocean.
