Measuring Models’ Uncertainty: Conformal Prediction

Uncertainty estimation on predictions is a critical asset, both for designing machine learning (ML) models and for monitoring them in production. It helps identify suspicious samples during model training as well as detect out-of-distribution samples at inference time.

In this blog post, we introduce the conformal prediction framework. It gives ML practitioners a simple, model-agnostic measure of uncertainty for every sample's prediction, in the form of prediction regions.
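To make the idea concrete, here is a minimal sketch of split (inductive) conformal prediction for regression. The data, the choice of a random forest, and the 90% coverage level are all illustrative assumptions, not part of the original post: we fit a model on a training set, compute absolute residuals as nonconformity scores on a held-out calibration set, and take a quantile of those scores as the half-width of the prediction interval.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic toy data, for illustration only.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=2000)

# Split conformal needs a proper training set and a separate calibration set.
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Conformal quantile for a target coverage of 1 - alpha = 90%.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction region for a new sample: [prediction - q, prediction + q].
x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
interval = (pred - q, pred + q)
```

Because the calibration scores are exchangeable with the score of a new sample, the interval `[pred - q, pred + q]` covers the true value with probability at least 90%, regardless of which model was used.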




the nitty gritty of data science by the experts @ dataiku

Léo Dreyfus-Schmidt
