Making Kaggle the Home of Open Data

Publish your data on Kaggle to share it with 600k+ data scientists

Kaggle is best known for running machine learning competitions. These competitions have helped classify whales in the oceans and galaxies in the sky; they’ve helped diagnose diabetic retinopathy and predict ad clicks.

Today, we’re expanding beyond machine learning competitions and opening Kaggle Datasets up to everyone. You can now instantly share and publish data through Kaggle. This creates a home for your dataset and a place for our community to explore it. Your data immediately becomes available in Kaggle Kernels, meaning that all analysis and insights are shared alongside the dataset.

This is the most recent step in a big set of changes. Last year we launched Kaggle Kernels (originally named Scripts), a reproducible data science environment, to help our community work together on competitions. Six months ago, we launched Kaggle Datasets with a handful of datasets we curated. Last month we revamped our profiles, allowing users to show off their kernels and contributions to discussions as well as their competition performance.

All this is to take Kaggle beyond competitions: our mission is to help the world learn from data, and we want to be the place where data scientists come for all of their data science. For the software engineers reading this post, think Stack Overflow, GitHub and TopCoder rolled into one data-focused platform.

Why post a dataset on Kaggle?

As a scientist, you can publish the data and code from your latest experiment. Doing so enables other scientists in your field to reproduce the results in your paper and build on top of them. It will allow others to more deeply engage with your work and give it a wider audience.

As a hobbyist, you can publish data you’re passionate about on Kaggle and grow a community around the dataset that shares the same interest.

As a package author, you can release a dataset and code that showcases your package with executable documentation examples. Data scientists find it faster and easier to learn from examples instead of extensive API documentation.

As a student, you can use Kaggle to create your class projects. This saves you from needing to set up a local analytics environment, and starts building your data science portfolio. We recently revamped Kaggle profiles and the progression system to emphasize code and discussion, making them even more helpful for data science hiring managers.

As a data vendor, you can release a sample of your dataset. This is the best way to broadcast the potential of your data to the world’s largest community of data scientists.

As a company or nonprofit, you can publish data that you want our community to explore. At Kaggle, we release most of the publicly scrapable data on the site in an easily digestible form. We’ve learned a lot about our own business from the kernels our community has created.

As a government, you can release the data your agencies collect on Kaggle. Rather than launching your datasets into an empty room, you can release them into a vibrant ecosystem and see the kind of insights the Kaggle community finds in your data.

Publishing your data through Kaggle

Creating a new dataset

Sharing data through Kaggle is incredibly simple. Once you have your data prepared, it only takes minutes to publish it on Kaggle.

A single page for uploading your dataset

Getting visibility for your dataset

Publishing your data on Kaggle surfaces it on Kaggle Datasets as well as your own user profile. We encourage you to tweet your dataset and share it with those interested in it. We also regularly feature high quality and well-documented datasets to our community, both through our blog and newsletter.

Datasets you publish are listed on your Kaggle profile

Exploring the dataset

Creating a dataset on Kaggle immediately enables it in our reproducible data science environment, Kaggle Kernels. Any Kaggle user can then create a new script or notebook and run R, Python, Julia, or SQLite code on the data without downloading it. We maintain Docker containers for each language with all commonly used analytics packages already installed.

Interactively explore the data through scripts and notebooks
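To make the workflow concrete, here is a minimal sketch of what exploring a dataset in a Python kernel looks like. The input path and file name are assumptions for illustration (in a real kernel, the dataset’s files are mounted read-only for you); here we simulate that layout with a temporary directory and a tiny sample CSV.

```python
import os
import tempfile

import pandas as pd

# Hypothetical stand-in for the kernel's input directory, where the
# dataset's files would already be waiting -- no download step required.
input_dir = tempfile.mkdtemp()
sample_path = os.path.join(input_dir, "measurements.csv")
with open(sample_path, "w") as f:
    f.write("city,temperature\nLondon,11.3\nNairobi,24.1\n")

# Loading and summarizing the data is then a couple of lines.
df = pd.read_csv(sample_path)
print(df.shape)                   # (2, 2)
print(df["temperature"].mean())   # 17.7
```

Because the environment ships with the common analytics packages preinstalled, the same few lines work for any dataset on the site.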

Downloading the dataset

Our community can also download the data and work with it locally, either as a zip archive of the entire dataset that we create automatically, or as individual files.

Users can download the entire dataset or individual files
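Since the archive is a standard zip file, the Python standard library is all you need to unpack it after downloading. This is a hedged sketch: the file names inside the archive are hypothetical, and we build a tiny in-memory zip to stand in for the downloaded dataset.

```python
import io
import zipfile

# Build a small in-memory archive to stand in for a downloaded dataset zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("train.csv", "id,label\n1,cat\n2,dog\n")
    zf.writestr("README.md", "# Sample dataset\n")

# List and read the contents, as you would after downloading the archive.
with zipfile.ZipFile(buf) as zf:
    names = sorted(zf.namelist())
    train = zf.read("train.csv").decode()

print(names)                   # ['README.md', 'train.csv']
print(train.splitlines()[0])   # id,label
```

To extract everything to disk instead, `zf.extractall("my_dataset/")` does the job in one call.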

Explore the code and insights our community creates

You’ll be able to see the code and insights that the community shares through Kaggle Kernels, and interact with others through the discussion forums. The forums also help a community of collaborators grow around the data itself, as they explore it and answer each other’s questions.

We provide a place to publish insights on the data and discuss it

Dataset activity feed

You’ll be able to follow activity on the dataset through its feed. This surfaces new kernel runs, comments, and dataset versions.

The dataset activity feed is a quick way to see what users have recently done on the data

Dataset versions

Our interface makes creating and surfacing a new version of the dataset painless, both for you and the community. We preserve access to historic versions of the dataset for reproducibility, and we add an alert when you’re working with older versions of the data.

We automatically track and preserve access to previous dataset versions for reproducibility

Any questions?

We look forward to seeing what you publish and create on Kaggle! If you have any questions, comments, or issues, please post on our forums or email me at ben@kaggle.com.

Thanks to Anna Montoya, Anthony Goldbloom, Jeff Moser, Jerad Rose, Rand Seay, Meghan O’Connell, Stephen Merity, and Walter Reade for reading drafts.

P.S. Want to help flesh out our vision and bring it to life? We’re hiring full-stack software engineers.