Secure Collaborative XGBoost on Encrypted Data
A library for multi-party training and inference of XGBoost models using secure enclaves
We recently released Secure XGBoost, a library that enables collaborative XGBoost training and inference on encrypted data. Secure XGBoost is part of the umbrella MC² project, under which we are working on a variety of tools for privacy-preserving machine learning.
In particular, Secure XGBoost facilitates secure collaborative learning, in which mutually distrustful data owners jointly train a model on their combined data without revealing that data to each other. Secure collaborative learning is a powerful paradigm: by pooling data that individual organizations cannot share in the clear, parties can train more accurate and robust models than any of them could build alone.
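Conceptually, the data flow works as follows: each party encrypts its data locally, releases its key only to an attested enclave, and the enclave decrypts and trains inside protected memory. The toy sketch below illustrates this flow in plain Python; the `Enclave` class, the XOR "cipher", and the key-provisioning step are illustrative stand-ins only, not Secure XGBoost's actual API (real deployments use hardware enclaves, remote attestation, and authenticated encryption such as AES-GCM).

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'encryption': XOR with a repeating key (NOT secure)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Enclave:
    """Stand-in for a hardware enclave: the only place where keys and
    plaintext coexist. Parties release keys only after (simulated)
    remote attestation convinces them the enclave runs trusted code."""
    def __init__(self):
        self._keys = {}        # party -> key, provisioned after attestation
        self._plaintexts = []  # decrypted data; never leaves the enclave

    def provision_key(self, party: str, key: bytes):
        self._keys[party] = key

    def load_encrypted(self, party: str, ciphertext: bytes):
        self._plaintexts.append(xor_crypt(ciphertext, self._keys[party]))

    def train(self) -> int:
        # Placeholder for joint XGBoost training on the pooled plaintext;
        # here we just "learn" the total number of pooled records.
        return sum(len(p.split(b"\n")) for p in self._plaintexts)

# Two mutually distrustful parties encrypt locally with their own keys.
key_a, key_b = b"alice-key", b"bob-key"
ct_a = xor_crypt(b"row1\nrow2", key_a)
ct_b = xor_crypt(b"row3\nrow4\nrow5", key_b)

enclave = Enclave()
for party, key in (("alice", key_a), ("bob", key_b)):
    enclave.provision_key(party, key)
enclave.load_encrypted("alice", ct_a)
enclave.load_encrypted("bob", ct_b)
print(enclave.train())  # the joint model "sees" all 5 rows, yet neither
                        # party ever sees the other's plaintext data
```

The key property the sketch captures is that ciphertexts are all that cross the trust boundary: each party's plaintext is reconstructed only inside the enclave, which is exactly what lets mutually distrustful parties pool their data.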
We’ve been partnering with some teams in industry, including Scotiabank and Ant Financial, to deploy Secure XGBoost for anti-money laundering and fraud detection.
This work was supported in part by the NSF CISE Expeditions Award CCF-1730628, and gifts from the Sloan Foundation, Bakar Program, Alibaba, Amazon Web Services, Ant Financial, Capital One, Ericsson, Facebook, Futurewei, Google, Intel, Microsoft, Nvidia, Scotiabank, Splunk, and VMware.