How far have we come with Federated Learning, part 3 (Machine Learning)

Monodeep Mukherjee
3 min read · Nov 27, 2022
Photo by ThisisEngineering RAEng on Unsplash

Getting Started with Federated Learning

1. Exploiting Personalized Invariance for Better Out-of-distribution Generalization in Federated Learning (arXiv)

Authors: Xueyang Tang, Song Guo, Jie Zhang

Abstract: Recently, data heterogeneity among the training datasets on local clients (a.k.a. Non-IID data) has attracted intense interest in Federated Learning (FL), and many personalized federated learning methods have been proposed to handle it. However, the distribution shift between the training and testing datasets on each client is rarely considered in FL, despite being common in real-world scenarios. We observe that this distribution-shift (i.e., out-of-distribution generalization) problem becomes particularly challenging in the Non-IID federated setting due to the entanglement between personalized and spurious information. To tackle this problem, we develop a general dual-regularized learning framework that explores personalized invariance, in contrast to existing personalized federated learning methods, which are regularized by a single baseline (usually the global model). By exploiting personalized invariant features, the resulting personalized models can use the most relevant information while eliminating spurious information, thereby enhancing out-of-distribution generalization performance for each client. Both the theoretical analysis of convergence and OOD generalization performance and extensive experimental results demonstrate the superiority of our method over existing federated learning and invariant learning methods across diverse out-of-distribution and Non-IID data settings.
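To make the idea of dual regularization concrete, here is a minimal PyTorch-style sketch of what a local objective with two regularizers could look like. The IRM-style invariance penalty, the `lambda_global` / `lambda_inv` weights, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def irm_style_penalty(logits, labels):
    # IRMv1-style proxy for invariance (an assumption, not the paper's exact
    # criterion): gradient of the risk w.r.t. a dummy classifier scale.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def dual_regularized_loss(model, global_model, x, y,
                          lambda_global=0.1, lambda_inv=1.0):
    # Task loss on the client's local batch.
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)

    # Regularizer 1: proximal term keeping the personalized model close to
    # the shared global model (the single baseline most PFL methods use).
    prox = sum((w - g.detach()).pow(2).sum()
               for w, g in zip(model.parameters(), global_model.parameters()))

    # Regularizer 2: invariance penalty discouraging reliance on spurious,
    # environment-specific features.
    inv = irm_style_penalty(logits, y)

    return task_loss + lambda_global * prox + lambda_inv * inv
```

The first term is the familiar personalization anchor; the second stands in for whatever invariance criterion the paper actually uses to strip spurious features.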

2. Gradient-Free Federated Learning Methods with l1 and l2-Randomization for Non-Smooth Convex Stochastic Optimization Problems (arXiv)

Authors: Aleksandr Lobanov, Belal Alashqar, Darina Dvinskikh, Alexander Gasnikov

Abstract: This paper studies non-smooth problems of convex stochastic optimization. Using a smoothing technique based on replacing the function value at the considered point with the function value averaged over a ball (in the l1-norm or l2-norm) of small radius centered at that point, the original problem is reduced to a smooth problem (whose gradient Lipschitz constant is inversely proportional to the radius of the ball). An important property of this smoothing is that an unbiased estimate of the gradient of the smoothed function can be computed using only realizations of the original function. The resulting smooth stochastic optimization problem is then solved in a distributed federated learning architecture (the problem is solved in parallel: nodes take local steps, e.g., stochastic gradient descent, then communicate with all other nodes, and this process is repeated). The goal of this paper is to build, on current advances in gradient-free non-smooth optimization and in the field of federated learning, gradient-free methods for solving non-smooth stochastic optimization problems in a federated learning architecture.
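The randomized-smoothing estimator described above is concrete enough to sketch. Below is a minimal NumPy version of a standard two-point gradient-free estimator with l2 randomization, plus a toy "local steps, then all-to-all averaging" loop; the exact estimator, step sizes, and communication pattern in the paper may differ, and all names here are illustrative.

```python
import numpy as np

def l2_two_point_gradient(f, x, tau=1e-3, rng=None):
    # Estimate the gradient of the ball-smoothed surrogate of f using only
    # two realizations of f along a random unit direction (l2 randomization).
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(x.shape[0])
    e /= np.linalg.norm(e)          # uniform direction on the l2 unit sphere
    return x.shape[0] / (2.0 * tau) * (f(x + tau * e) - f(x - tau * e)) * e

def federated_zeroth_order(local_fs, x0, rounds=50, local_steps=5,
                           lr=0.1, tau=1e-3):
    # Each node runs a few local zeroth-order SGD steps on its own objective,
    # then all nodes communicate (here: simple all-to-all averaging); repeat.
    xs = [x0.copy() for _ in local_fs]
    for _ in range(rounds):
        for i, f in enumerate(local_fs):
            for _ in range(local_steps):
                xs[i] = xs[i] - lr * l2_two_point_gradient(f, xs[i], tau)
        avg = sum(xs) / len(xs)     # all-to-all communication step
        xs = [avg.copy() for _ in local_fs]
    return xs[0]
```

With l1 randomization, the direction is drawn with respect to the l1 geometry instead; the abstract covers both variants.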

3. Personalized Federated Learning with Hidden Information on Personalized Prior (arXiv)

Authors: Mingjia Shi, Yuhao Zhou, Qing Ye, Jiancheng Lv

Abstract: Federated learning (FL for short) is a distributed machine learning technique that uses a global server and collaborating clients to achieve privacy-preserving global model training without direct data sharing. However, the heterogeneous data problem, one of FL's main challenges, makes it difficult for the global model to perform effectively on each client's local data. Personalized federated learning (PFL for short) therefore aims to improve the model's performance on local data as much as possible. Bayesian learning, in which the model parameters are treated as random variables with a prior assumption, is a feasible solution to the heterogeneous data problem: the more local data the model uses, the more it focuses on that local data; otherwise, it relies on the prior. When Bayesian learning is applied to PFL, the global model provides global knowledge as a prior for the local training process. In this paper, we use Bayesian learning to model PFL by assuming a prior in the scaled exponential family, and propose pFedBreD, a framework that solves the resulting problem using Bregman divergence regularization. Empirically, our experiments show that, under the prior assumption of a spherical Gaussian and the first-order strategy of mean selection, our proposal significantly outperforms other PFL algorithms on multiple public benchmarks.
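As a rough illustration of why the spherical Gaussian prior is convenient here: in that special case, a Bregman-divergence regularizer collapses to a squared Euclidean (proximal) distance between the local model and the prior mean supplied by the server. The sketch below assumes exactly that special case; pFedBreD's actual mean-selection strategies and divergence are defined in the paper, and the function and parameter names are illustrative.

```python
import torch.nn.functional as F

def pfl_local_objective(model, prior_mean_model, x, y, lam=0.5):
    # Local task loss on the client's data.
    task_loss = F.cross_entropy(model(x), y)

    # Under a spherical Gaussian prior, the Bregman-divergence regularizer
    # reduces to a scaled squared l2 distance to the prior mean (the global
    # knowledge broadcast by the server).
    breg = sum((w - m.detach()).pow(2).sum()
               for w, m in zip(model.parameters(),
                               prior_mean_model.parameters()))

    return task_loss + 0.5 * lam * breg
```

Minimizing this objective locally pulls each personalized model toward its own data while the proximal term keeps it anchored to the global prior, which is the trade-off the abstract describes.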


Monodeep Mukherjee

Universe Enthusiast. Writes about Computer Science, AI, Physics, Neuroscience, Technology, and Front-End and Back-End Development.