GDPR and Machine Learning Black Boxes

The European Union's General Data Protection Regulation (GDPR) includes a provision that any AI or ML algorithm used for “meaningful” decision making must be transparent and explainable.

“Meaningful” is a word that regulators use when they expect imprecise and evolving interpretations. The rule is meant to prevent decisions like loan approvals or criminal sentences from coming out of a black box. Blindly training a machine learning model on past history or past transactions can perpetuate unwanted patterns of bias or prejudice that exist in the data.

An interesting strategy that some companies are using to comply with the regulation is to run a simpler, more explainable model side-by-side with any deep neural nets that could become black boxes.

For example, a traditional linear regression and a deep neural net trained on the same data might agree 90% of the time. In those cases you could report the linear model's decision, along with its readily interpreted coefficients, to hedge against any GDPR inquiries; for the remaining 10%, you could use the neural net in combination with some sort of audit or qualitative check.
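The routing logic behind this strategy can be sketched in a few lines. This is a minimal illustration, not a production implementation: `simple_model` and `deep_model` are hypothetical stand-ins for a fitted linear model and a fitted neural net, and the thresholds and applicant fields are invented for the example.

```python
# Sketch of the side-by-side compliance strategy described above.
# `simple_model` and `deep_model` are hypothetical stand-ins for a
# trained linear model and a trained neural net; in practice each
# would be a fitted estimator with a predict() method.

def simple_model(applicant):
    # Interpretable rule: approve if income comfortably covers the payment.
    return applicant["income"] >= 3 * applicant["loan_payment"]

def deep_model(applicant):
    # Stand-in for an opaque score (a made-up weighted threshold here).
    score = 0.7 * applicant["income"] - 2.0 * applicant["loan_payment"]
    return score > 0

def decide(applicant):
    """Route each case: if the models agree, use the explainable
    decision; if they disagree, flag the case for human audit."""
    a, b = simple_model(applicant), deep_model(applicant)
    if a == b:
        return {"decision": a, "explained_by": "linear_rule"}
    return {"decision": None, "explained_by": "audit_required"}

applicants = [
    {"income": 5000, "loan_payment": 1000},  # both models approve
    {"income": 2900, "loan_payment": 1000},  # models disagree -> audit
    {"income": 1000, "loan_payment": 800},   # both models reject
]

results = [decide(a) for a in applicants]
agreement = sum(r["explained_by"] == "linear_rule" for r in results) / len(results)
```

The key design point is that the explainable model answers every inquiry it can, and disagreement itself becomes a signal for which cases deserve the extra qualitative review.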
