Uncertainty estimation on predictions is a critical asset for designing machine learning (ML) models as well as for monitoring them in production: it helps identify suspicious samples during model training and detect out-of-distribution samples at inference time.
In this blog post, we introduce the conformal prediction framework. It provides ML practitioners with a simple, model-agnostic measure of uncertainty for every sample prediction, in the form of prediction regions.
We validate this measure of uncertainty by benchmarking it on a collection of datasets, comparing the error rate on samples with large prediction regions against the error rate over all samples. …
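To make the idea concrete, here is a minimal sketch of split conformal prediction for classification, using scikit-learn. The dataset, model, nonconformity score (one minus the probability of the true class), and the miscoverage rate `alpha` are illustrative assumptions, not the benchmark setup from this post.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

alpha = 0.1  # target miscoverage: regions should cover the true label ~90% of the time

# Split the data into a proper training set, a calibration set, and a test set.
X, y = load_iris(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_calib, X_test, y_calib, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Nonconformity score on the calibration set: 1 - predicted probability of the true class.
calib_probs = model.predict_proba(X_calib)
calib_scores = 1.0 - calib_probs[np.arange(len(y_calib)), y_calib]

# Threshold = quantile of calibration scores, with the standard finite-sample correction.
n = len(calib_scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
q_hat = np.quantile(calib_scores, q_level, method="higher")

# Prediction region for each test sample: every class whose score stays below the threshold.
test_probs = model.predict_proba(X_test)
prediction_regions = [np.where(1.0 - probs <= q_hat)[0] for probs in test_probs]

# Larger regions signal higher uncertainty for that particular sample.
region_sizes = np.array([len(r) for r in prediction_regions])
print("average region size:", region_sizes.mean())
```

The only requirement on the underlying model is that it produces a score per class, which is what makes the approach model-agnostic: swapping `LogisticRegression` for any other probabilistic classifier leaves the calibration logic unchanged.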