Bringing privacy-enabled collaborative AI to the edge

Ritabrata Maiti
Intel Student Ambassadors
Jul 30, 2019

Artificial intelligence that learns collaboratively, commonly known as federated learning, was popularised by the publication of Google’s experiments with text suggestion on the Google AI Blog. Such a system allows similar neural networks to be combined into a single unified model that performs much better than any of the individual networks. It also has the advantage of ensuring a certain degree of privacy: since training is performed locally, personal data is never transferred to an external server. Confidential stuff stays confidential.

(Image credits: Google AI Blog)

Rather than aggregating data at a central location, a neural network is trained locally for each user (a user being anyone who benefits from the deep learning model, such as someone using AI-based analytics or owning an autonomous vehicle). The trained models are then sent to a central server, which orchestrates the aggregation of these models into a single model that is pushed back to the users. The result is that users benefit from a model that performs better than any of the individual locally trained models, without giving up their privacy. A minimal sketch of one such communication round follows.
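To make the orchestration concrete, here is a toy sketch of a single federated round, assuming a simple linear model trained with plain gradient descent. The helper names and data are my own illustration, not Google’s actual system:

```python
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    """Client side: a few epochs of gradient descent on a linear model,
    touching only this client's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """Server side: push the global model to every client, collect the
    locally trained copies, and combine them into a new global model."""
    local_models = [local_train(global_w, X, y) for X, y in client_data]
    return np.mean(local_models, axis=0)  # simple mean; FedAvg refines this

# Two clients whose raw data never leaves their devices.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]

global_w = np.zeros(3)
for _ in range(10):  # ten communication rounds
    global_w = federated_round(global_w, clients)
```

Note that only model weights cross the network; the training examples themselves stay on each client.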

(Image credits: Google AI Blog)

While avoiding the aggregation of user data at a central location provides a layer of security and privacy, the model formed by combining the locally trained models generally does not perform as well as a single model trained on the aggregated dataset. At present, multiple techniques exist for combining models, with federated averaging (FedAvg) being one of the most widely adopted. This technique is also used by TensorFlow’s federated learning library, TensorFlow Federated.
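Federated averaging refines the simple mean from the sketch above by weighting each client’s model by how much data it trained on. A minimal illustration (my own variable names, not TensorFlow Federated’s API):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: average client models, weighting each by the
    number of training examples that client holds."""
    total = sum(client_sizes)
    return [
        sum(w * (n / total) for w, n in zip(layer_copies, client_sizes))
        for layer_copies in zip(*client_weights)
    ]

# Two clients with per-layer weight arrays; client 0 has 3x the data.
client_weights = [
    [np.ones((2, 2)), np.zeros(2)],   # client 0
    [np.zeros((2, 2)), np.ones(2)],   # client 1
]
new_global = federated_average(client_weights, client_sizes=[300, 100])
print(new_global[0])  # 0.75s: dominated by the larger client
```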

While avoiding direct transmission of data provides a degree of security and privacy, neural networks can still be reverse-engineered, and confidential training data can be extracted from the model. One way of preventing this is to implement differential privacy. The nuances of differential privacy are quite hard to explain in plain text, so I will let the video below do the heavy lifting :) Bonus: this video also touches upon adversarial attacks.
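That said, the core mechanism is easy to sketch: bound each update’s influence by clipping its norm, then add calibrated noise. Below is a minimal illustration in the spirit of DP-SGD; the function and parameter names are my own, and choosing a noise multiplier that meets a formal (ε, δ) guarantee requires a proper privacy accountant, which this toy omits:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a model update to a maximum L2 norm, then add Gaussian noise.
    Clipping bounds any single example's influence; the noise masks it."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

raw_update = np.array([3.0, -4.0])          # L2 norm 5, well above the clip
private_update = privatize_update(raw_update)
```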

Now that I have addressed the issue of enabling privacy in a collaborative learning environment, let us talk about implementing such systems on the edge. There have been many recent advances in fitting complex models onto IoT-based edge devices, with one of the most effective techniques being quantization. This video sums up neural network quantization with TensorFlow quite well, so I would definitely recommend watching it.
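At its core, 8-bit quantization maps floating-point weights onto small integers using a scale and a zero-point, shrinking the model roughly 4x. Here is a sketch of per-tensor affine (asymmetric) quantization; this is illustrative, not TensorFlow’s exact implementation:

```python
import numpy as np

def quantize_uint8(w):
    """Affine 8-bit quantization: map float weights onto integers 0..255
    with a scale and zero-point."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(np.round(-lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the 8-bit representation."""
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_uint8(w)
print(np.abs(w - dequantize(q, s, z)).max())  # small reconstruction error
```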

TensorFlow ships its own toolkit for quantizing and deploying neural networks, TensorFlow Lite (tflite), so do head on over to the page (https://www.tensorflow.org/lite) if you want to implement your own quantized models!
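For instance, post-training quantization of a Keras model takes only a few lines with the TensorFlow Lite converter (a sketch using the TF 2.x API; the toy model is just a stand-in for your own trained network):

```python
import tensorflow as tf

# A stand-in Keras model; substitute your own trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite with default post-training quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the compact model for deployment on an edge device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```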

About me (https://www.linkedin.com/in/ritabratamaiti/): I am a computer science student who has completed multiple research projects involving practical machine learning. I have worked as a research intern at Nanyang Technological University, and I am also an Intel® Student Ambassador.
