Federated Learning — Privacy-Preserving Machine Learning

Secure Multiparty Computation to the Rescue

Debmalya Biswas
Darwin Edge AI

--

Federated Learning (image based on a photo by Andrea Piacquadio from Pexels)

Federated learning [1], also known as collaborative learning or privacy-preserving machine learning, enables multiple entities that do not (fully) trust each other to collaboratively train a Machine Learning (ML) model on their combined dataset without actually sharing data — addressing critical issues such as privacy, access rights, and access to heterogeneous confidential data.

This contrasts with traditional (centralized) ML techniques, where local datasets belonging to different entities must first be brought to a common location before model training. Federated learning's applications span a number of industries, including defense, telecommunications, healthcare, advertising [2], and chatbots [3].
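To make the idea concrete, the canonical federated training loop is federated averaging: a server broadcasts the current model, each client trains locally on its private data, and the server averages the returned weights. The sketch below is a minimal toy illustration with synthetic data and a linear least-squares model; the client data, learning rate, and round counts are assumptions chosen purely for demonstration, not part of any production protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent update (least-squares loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Three clients, each holding a private synthetic dataset drawn from
# the same underlying model; raw data never leaves a client.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Federated averaging: broadcast weights, train locally, average updates.
global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_train(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # converges close to true_w
```

Note that only the weight vectors cross the network; the per-client `(X, y)` arrays stay local, which is the core privacy property federated learning targets.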

ML Attack Scenarios

Let us now focus on ML-related privacy risks [4, 5]. Fig. 1 illustrates the attack scenarios in an ML context. In this setting, there are two broad categories of inference attacks: membership inference and property inference attacks. A membership inference attack refers to a basic privacy violation, where the attacker's objective is to determine whether a specific user data item was present in the training dataset. In property inference attacks, the attacker's…
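A common intuition behind membership inference is that overfitted models incur noticeably lower loss on training members than on unseen points, so an attacker can guess membership by thresholding the per-sample loss. The toy sketch below illustrates this idea only; the exact-fit "model", the synthetic data, and the threshold value are all assumptions for demonstration, not a real attack implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "overfitted" linear model: it fits its training members exactly,
# but makes noisy errors on fresh (non-member) points.
w = np.array([1.5, -0.5])
X_member = rng.normal(size=(100, 2))
y_member = X_member @ w                      # zero loss on members

X_nonmember = rng.normal(size=(100, 2))
y_nonmember = X_nonmember @ w + rng.normal(scale=1.0, size=100)

def per_sample_loss(X, y):
    """Squared error of the model on each individual sample."""
    return (X @ w - y) ** 2

# Attacker's rule: predict "member" when the loss is below a threshold.
threshold = 0.5
guess_member = per_sample_loss(X_member, y_member) < threshold
guess_nonmember = per_sample_loss(X_nonmember, y_nonmember) < threshold

# Balanced accuracy of the attack (0.5 would be random guessing).
accuracy = (guess_member.mean() + (1 - guess_nonmember.mean())) / 2
print(accuracy)
```

Because the gap between member and non-member loss grows with overfitting, regularization and differential-privacy training are the usual mitigations against this class of attack.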
