[Notes] Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness

Joanna
Published in Geek Culture
5 min read · Aug 2, 2021


Original paper: https://arxiv.org/pdf/2006.10108.pdf

1. Introduction

Previous uncertainty estimation methods (e.g. Deep Ensembles, MC Dropout) have two flaws:

  • They require multiple forward passes (or multiple trained models) at inference time.
  • Their uncertainty is concentrated around the decision boundary, so a test point that is far from the training data but also far from the boundary is still assigned high confidence.
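To see why boundary-tied uncertainty is a problem, consider a plain linear classifier: its confidence depends only on the distance to the separating hyperplane, so an out-of-distribution point lying far from the boundary still gets near-zero uncertainty. A minimal sketch (toy weights chosen for illustration, not from the paper):

```python
import numpy as np

def binary_entropy(p):
    # Predictive entropy in nats; 0 means fully confident.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def confidence(w, x):
    # Sigmoid of the logit w.x of a linear classifier.
    return 1.0 / (1.0 + np.exp(-w @ x))

w = np.array([1.0, 0.0])            # decision boundary: x1 = 0
x_boundary = np.array([0.0, 0.0])   # on the boundary, near the data
x_far_ood = np.array([50.0, 50.0])  # far from any training data, but far from the boundary too

h_boundary = binary_entropy(confidence(w, x_boundary))  # high (~ln 2)
h_ood = binary_entropy(confidence(w, x_far_ood))        # ~0: confidently wrong on OOD input
```

The distant point is maximally "certain" simply because the logit magnitude is large, which is exactly the failure mode the figure panels below illustrate.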

In this paper, the authors propose a single deterministic model that needs only one forward pass at inference time. Moreover, its uncertainty is distance-aware rather than restricted to the decision boundary.

  • Bright yellow: uncertainty 1.0 (large); dark blue: uncertainty 0.0 (small).
  • (a), (f): With GP classification, uncertainty is small near the training data and large when the test sample is far from the training data domain.
  • (b), (c), (g), (h): Uncertainty is large only near the decision boundary.
  • (d), (i): Still exhibit the same problem (uncertainty tied to the decision boundary).
  • (e), (j): The proposed method behaves similarly to the GP in this low-dimensional setting.
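The distance-aware behaviour in panels (a)/(f) can be reproduced with a plain GP posterior variance: it shrinks near training points and reverts to the prior far away. A small NumPy sketch with an RBF kernel and illustrative data (not the paper's experiment):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential kernel: k(a, b) = exp(-||a - b||^2 / (2 l^2)).
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2 * a @ b.T)
    return np.exp(-d2 / (2 * length_scale**2))

def gp_predictive_variance(X_train, X_test, noise=1e-6, length_scale=1.0):
    # GP posterior variance: k(x*, x*) - k(x*, X) (K + sigma^2 I)^-1 k(X, x*).
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train, length_scale)
    solve = np.linalg.solve(K, K_star.T)
    return 1.0 - np.sum(K_star * solve.T, axis=1)  # prior variance is 1.0

X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
v_near = gp_predictive_variance(X_train, np.array([[0.1, 0.1]]))  # close to the data: small
v_far = gp_predictive_variance(X_train, np.array([[5.0, 5.0]]))   # far away: near the prior, 1.0
```

Uncertainty here is a function of distance to the training set, independent of any decision boundary, which is the property the paper wants a deterministic deep model to inherit.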

1.1 Problem setup

The uncertainty is characterized by the predictive distribution, which can be decomposed into two cases…
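A plausible reading of the two cases, following the paper's problem setup, is a split over whether the input lies in the training domain (the in-domain set is written here as $\mathcal{X}_{\text{IND}}$; a hedged sketch, not a verbatim copy of the paper's equation):

```latex
p(y \mid \mathbf{x}) =
  \underbrace{p(y \mid \mathbf{x}, \mathbf{x} \in \mathcal{X}_{\text{IND}})\,
              p(\mathbf{x} \in \mathcal{X}_{\text{IND}})}_{\text{in-domain}}
  + \underbrace{p(y \mid \mathbf{x}, \mathbf{x} \notin \mathcal{X}_{\text{IND}})\,
                p(\mathbf{x} \notin \mathcal{X}_{\text{IND}})}_{\text{out-of-domain}}
```

Under this split, a good model should be confident only when the input is likely in-domain and should fall back to high uncertainty otherwise.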
