Working with the Fisher-Rao method, part 6 (Machine Learning 2024)

Monodeep Mukherjee
2 min read · Apr 10, 2024
1. On Closed-Form Expressions for the Fisher-Rao Distance (arXiv)

Authors: Henrique K. Miyamoto, Fábio C. C. Meneghetti, Julianna Pinele, Sueli I. R. Costa

Abstract: The Fisher-Rao distance is the geodesic distance between probability distributions in a statistical manifold equipped with the Fisher metric, which is a natural choice of Riemannian metric on such manifolds. It has recently been applied to supervised and unsupervised problems in machine learning, in various contexts. Finding closed-form expressions for the Fisher-Rao distance is generally a non-trivial task, and those are only available for a few families of probability distributions. In this survey, we collect examples of closed-form expressions for the Fisher-Rao distance of both discrete and continuous distributions, aiming to present them in a unified and accessible language. In doing so, we also: illustrate the relation between negative multinomial distributions and the hyperbolic model, include a few new examples, and write a few more in the standard form of elliptical distributions.
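
As a concrete example of the kind of closed-form expression the survey collects (a standard result reproduced here for illustration, not quoted from the paper), the Fisher-Rao distance between univariate normal distributions has a hyperbolic closed form: the Fisher metric on the (μ, σ) half-plane is, up to rescaling μ by √2, the Poincaré half-plane metric. A minimal Python sketch:

```python
import math

def fisher_rao_normal(mu1, sigma1, mu2, sigma2):
    """Closed-form Fisher-Rao distance between N(mu1, sigma1^2) and N(mu2, sigma2^2).

    The Fisher metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2 is hyperbolic, so the
    geodesic distance is sqrt(2) times a Poincare half-plane distance.
    """
    num = (mu2 - mu1) ** 2 + 2.0 * (sigma2 - sigma1) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (4.0 * sigma1 * sigma2))

# Identical distributions are at distance zero; separating the means
# costs less (in Fisher-Rao terms) when the variances are large.
print(fisher_rao_normal(0.0, 1.0, 0.0, 1.0))  # 0.0
print(fisher_rao_normal(0.0, 1.0, 3.0, 1.0))  # ~2.61
print(fisher_rao_normal(0.0, 3.0, 3.0, 3.0))  # smaller than the previous value
```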

2. An Explicit Expansion of the Kullback-Leibler Divergence along its Fisher-Rao Gradient Flow (arXiv)

Authors: Carles Domingo-Enrich, Aram-Alexandre Pooladian

Abstract: Let V*: R^d → R be some (possibly non-convex) potential function, and consider the probability measure π ∝ e^{−V*}. When π exhibits multiple modes, it is known that sampling techniques based on Wasserstein gradient flows of the Kullback-Leibler (KL) divergence (e.g. Langevin Monte Carlo) suffer from poor rates of convergence, as the dynamics are unable to easily traverse between modes. In stark contrast, the work of Lu et al. (2019; 2022) has shown that the gradient flow of the KL with respect to the Fisher-Rao (FR) geometry exhibits a convergence rate to π that is independent of the potential function. In this short note, we complement these existing results in the literature by providing an explicit expansion of KL(ρ_t^FR ‖ π) in terms of e^{−t}, where (ρ_t^FR)_{t≥0} is the FR gradient flow of the KL divergence. In turn, we are able to provide a clean asymptotic convergence rate, where the burn-in time is guaranteed to be finite. Our proof is based on observing a similarity between FR gradient flows and simulated annealing with linear scaling, and facts about cumulant generating functions. We conclude with simple synthetic experiments that demonstrate our theoretical findings are indeed tight. Based on our numerics, we conjecture that the asymptotic rates of convergence for Wasserstein-Fisher-Rao gradient flows are possibly related to this expansion in some cases.
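
For intuition about the objects in this note (an illustrative sketch, not the paper's code or experiments): the FR gradient flow of the KL divergence admits a closed-form solution as a geometric mixture, ρ_t ∝ ρ_0^{e^{−t}} · π^{1−e^{−t}}, which is the simulated-annealing connection mentioned in the abstract. The Python snippet below evaluates KL(ρ_t ‖ π) on a 1D grid for an assumed bimodal Gaussian-mixture target and a Gaussian initialization, so the decay in e^{−t} can be seen numerically.

```python
import numpy as np

# 1D grid for numerical integration (illustrative setup, not the paper's experiments).
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Bimodal target pi and unimodal initialization rho_0 (arbitrary choices).
pi = 0.5 * gaussian(x, -4.0, 0.7) + 0.5 * gaussian(x, 4.0, 0.7)
rho0 = gaussian(x, 0.0, 1.0)

def kl(p, q):
    """Grid approximation of KL(p || q), ignoring points where p vanishes."""
    mask = p > 1e-300
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

for t in [0.0, 1.0, 2.0, 4.0, 8.0]:
    lam = np.exp(-t)
    # Closed-form FR flow: geometric mixture rho_t ∝ rho_0^lam * pi^(1 - lam).
    rho_t = rho0 ** lam * pi ** (1.0 - lam)
    rho_t /= np.sum(rho_t) * dx  # normalize on the grid
    print(f"t = {t:4.1f}   KL(rho_t || pi) = {kl(rho_t, pi):.6f}")
```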


Monodeep Mukherjee

Universe Enthusiast. Writes about Computer Science, AI, Physics, Neuroscience, Technology, and Front-End and Back-End Development