TL;DR: Using TF Encrypted, we were able to detect skin cancer on encrypted images. Within the last year, we reduced the runtime from 24 hours to 36 seconds. This work applies generally to privacy-preserving computer vision. The following post describes our journey to achieve this result and our roadmap to sub-second performance. You can find the code here.
Democratizing AI in Healthcare While Maintaining Privacy
If there is an area where machine learning will soon significantly improve our lives, it’s healthcare. Researchers have made several breakthroughs in applications involving medical imaging over the past three years. Machine learning systems that leverage deep learning techniques are now capable of diagnosing diabetic retinopathy, skin cancer, and breast cancer. As of 2016, more than 415 million diabetics worldwide were at risk of diabetic retinopathy, the fastest-growing cause of blindness. Approximately 3.5 million people in the U.S. are diagnosed with skin cancer every year. Over 232,000 women were diagnosed with breast cancer in the U.S. in 2015. Unfortunately, one of the biggest challenges is gaining access to specialists early enough to be treated effectively; this is particularly problematic in remote areas. Machine learning could democratize access to the best medical imaging diagnostics across the globe.
However, machine learning and privacy have traditionally been at odds. This creates a problem for data scientists building and deploying machine-learning-based healthcare systems as a service. Deep learning models need to have access to very large datasets to achieve state-of-the-art performance. Unfortunately, this data is often siloed among several organizations because of privacy concerns and liability risks. It is also very difficult to maintain control over personal data within these machine learning services. In short, in order to benefit from these powerful diagnostics, you have to share your data.
At Dropout Labs, we strongly believe that encrypted machine learning is the solution: enabling models to be trained and queried while the model, inputs, and outputs are encrypted. At no point is raw data shared or revealed.
Encrypted Deep Learning — a Technical Challenge
To build these encrypted deep learning systems, we use secure multi-party computation (MPC), which is a subfield of cryptography. MPC brings multiple parties together to jointly compute a function while keeping the input and output private. The primitive that powers MPC is the ability to split a piece of data into multiple encoded parts, known as secret shares. On their own, the shares reveal nothing about the original data. However, if two parties perform the same operation on a set of shares and then recombine them, it is as if that operation was performed on the original data.
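The secret-sharing primitive described above can be sketched in a few lines. This is a minimal, illustrative two-party additive scheme over the ring of 64-bit integers, not TF Encrypted's actual implementation; the function names are ours.

```python
import numpy as np

rng = np.random.default_rng()

def share(x):
    """Split an integer tensor into two shares; each share alone is uniform noise."""
    x = np.asarray(x, dtype=np.uint64)
    s0 = rng.integers(0, 2**64, size=x.shape, dtype=np.uint64)
    return s0, x - s0  # uint64 arithmetic wraps mod 2^64

def reconstruct(s0, s1):
    return s0 + s1  # wraps mod 2^64, recovering x

x = np.array([1, 2, 3], dtype=np.uint64)
y = np.array([10, 20, 30], dtype=np.uint64)
x0, x1 = share(x)
y0, y1 = share(y)
# Addition is local: each party adds the shares it holds, with no interaction.
assert np.array_equal(reconstruct(x0 + y0, x1 + y1), x + y)
```

Note that the same operation applied to each party's shares, then recombined, yields the operation applied to the original data, which is exactly the property the post relies on.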
With MPC, you can easily add and multiply tensors, and with these two basic operations you can begin building encrypted deep learning models. However, the challenge is that MPC involves multiple parties (e.g., several servers), which introduces networking overhead. MPC also traditionally requires integers larger than 64 bits, which introduces computational overhead as well. You can learn more about MPC in this blog post.
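Multiplication of shared values is where the networking overhead comes from: the standard approach uses so-called Beaver triples, which is what the "triples" mentioned later in this post refer to. The sketch below shows the idea with a trusted dealer producing the triples; this is an illustrative textbook construction, not TF Encrypted's code.

```python
import numpy as np

rng = np.random.default_rng(42)

def share(x):
    s0 = rng.integers(0, 2**64, size=np.shape(x), dtype=np.uint64)
    return s0, np.asarray(x, dtype=np.uint64) - s0  # wraps mod 2^64

def reconstruct(s0, s1):
    return s0 + s1  # wraps mod 2^64

def beaver_mul(x0, x1, y0, y1):
    """Multiply secret-shared x and y using one triple (a, b, c = a*b)."""
    shape = np.shape(x0)
    # Offline phase: a dealer samples a random triple and shares it out.
    a = rng.integers(0, 2**64, size=shape, dtype=np.uint64)
    b = rng.integers(0, 2**64, size=shape, dtype=np.uint64)
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share(a * b)
    # Online phase: the parties open the masked values eps = x - a, delta = y - b.
    # These reveal nothing about x and y because a and b are uniformly random.
    eps = reconstruct(x0 - a0, x1 - a1)
    delta = reconstruct(y0 - b0, y1 - b1)
    # Each party computes its share of x*y locally; only party 0 adds eps*delta.
    z0 = c0 + eps * b0 + delta * a0 + eps * delta
    z1 = c1 + eps * b1 + delta * a1
    return z0, z1

x0, x1 = share(np.array([6], dtype=np.uint64))
y0, y1 = share(np.array([7], dtype=np.uint64))
z0, z1 = beaver_mul(x0, x1, y0, y1)
assert int(reconstruct(z0, z1)[0]) == 42
```

The only communication in the online phase is opening the two masked tensors, which is why generating and moving triples dominates the cost for large models.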
The Trials and Tribulations
In Spring 2018, we built our encrypted deep learning library from scratch in Go. We were excited to see if we could provide private predictions to detect skin cancer, so we trained a convolutional model (similar to VGG16) on skin lesion images. Our secure computation protocol, called Pond, didn’t support comparisons at the time, as they are very expensive operations in MPC. So we replaced the Max-Pooling layers with Average-Pooling layers and approximated the exact ReLU with a polynomial. We replaced the last three fully connected (FC) layers with a single smaller FC layer. We found that this had a negligible impact on model accuracy.
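The polynomial-ReLU trick can be illustrated in a few lines: fit a low-degree polynomial to ReLU on a fixed interval, so that evaluating the activation needs only additions and multiplications, which MPC handles cheaply. The interval [-5, 5] and degree 2 below are illustrative assumptions, not the exact approximation we used.

```python
import numpy as np

# Fit a degree-2 polynomial to ReLU on an assumed input range.
xs = np.linspace(-5, 5, 1001)
relu = np.maximum(xs, 0)
coeffs = np.polyfit(xs, relu, deg=2)  # least-squares fit

# By symmetry of ReLU = (x + |x|) / 2, the linear coefficient comes out ~0.5.
max_err = np.max(np.abs(np.polyval(coeffs, xs) - relu))
print(coeffs, max_err)
```

The fit is worst near zero and at the interval's edges, and the error grows quickly outside the fitted range, which is one reason the approximated ReLUs hurt accuracy in deeper models, as described below.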
The final VGG16 model included about 15 million parameters and achieved an AUC of 0.89 on the test set. Believe it or not, it took more than 24 hours to get a private prediction. Our Go implementation was not optimized enough to compensate for the networking and computational overhead mentioned previously. It became obvious that improving this would require building a stack on top of existing libraries, which meant a significant investment and led us to consider another approach.
From 24 Hours to 8 Minutes in TensorFlow
In parallel, our colleague Morten was investigating whether TensorFlow could be a good fit for MPC, and for secure computation more generally. After experimenting with a proof of concept, we quickly saw the performance gain from leveraging a modern distributed computation platform such as TensorFlow. The first time we ran a private prediction with our VGG16 variant, it took 8 minutes. It was just the beginning of a long series of performance improvements.
From 8 Minutes to 1 Minute 4 Seconds with SecureNN
Very quickly, we were able to decrease the runtime for this same model to around a minute. To achieve this result, we mainly cleaned up the code and re-used triples (the masking tensors required by matrix multiplication in MPC). Throwing more CPUs at the problem also made a big difference: TensorFlow’s ability to parallelize computation and prioritize the right operations to save on networking drastically decreased the runtime. However, we discovered we were getting low accuracy for large models because of the approximated ReLUs.
The next step was to implement the SecureNN protocol, which broke speed records for encrypted machine learning with MPC in 2018. One of the benefits of this protocol is that it supports comparison operations, so we could use the VGG16 architecture out of the box with Max-Pooling and exact ReLU layers. We still replaced the last three fully-connected layers with a single small fully-connected layer (~15 million parameters).
This meant data scientists could leverage existing models pre-trained on ImageNet, then perform transfer learning to achieve better accuracy for their tasks. We were also able to make it work with int64 tensors instead of very large integers. We ran our private predictions again and achieved 1 minute 4 seconds with 96 vCPUs on each machine on GCP. Even though it was a bit slower than the Pond protocol, by using SecureNN we maintained the model’s accuracy. It was a concrete example of the trade-off between performance and accuracy, which is part of the challenge of using encrypted machine learning.
55 Seconds with SecureNN
Seeding the triples enabled us to run this model in under a minute. We plan to cover this concept in greater detail in a future post, but the short version is that you need a triple for every neuron in your model, and we were able to replace these millions of triples with a single seed that can generate the same set of numbers on each party. This greatly reduces the communication overhead.
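The seeding trick rests on a simple property of pseudo-random generators: two parties that agree on one small seed can each expand it locally into the same large random tensor, so nothing but the seed ever crosses the network. A minimal sketch, with an illustrative seed and size:

```python
import numpy as np

SEED = 1234          # agreed once between the parties; tiny compared to the data
shape = (1_000_000,)  # stand-in for the millions of random triple values

# Each party expands the seed independently with the same PRG.
mask_party_a = np.random.default_rng(SEED).integers(0, 2**64, size=shape, dtype=np.uint64)
mask_party_b = np.random.default_rng(SEED).integers(0, 2**64, size=shape, dtype=np.uint64)

# Identical tensors, with zero communication beyond the seed itself.
assert np.array_equal(mask_party_a, mask_party_b)
```

Here a few bytes of seed stand in for megabytes of random masks, which is where the communication savings come from. (In a real deployment the seed would be generated and exchanged securely, a detail this sketch omits.)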
36 Seconds with SecureNN, While Tweaking the Model
In the previous sections, we described many secure computation optimizations. However, you can also adapt the neural network architecture itself to better fit MPC’s constraints. By simply replacing the Max-Pooling layers with Average-Pooling layers, which involve only matrix multiplications instead of comparisons, you can query the model in 36 seconds.
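The reason this swap helps is that average pooling is a fixed linear map (sums plus a public scaling), so it costs only the cheap MPC operations, whereas max pooling needs a secure comparison per window. A plain-NumPy sketch of a 2x2 average pool, for illustration only:

```python
import numpy as np

def avg_pool_2x2(a):
    """2x2 average pooling: each output is the mean of one 2x2 block."""
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4

x = np.arange(16.0).reshape(4, 4)
print(avg_pool_2x2(x))
```

Since the whole operation is additions and a multiplication by the public constant 1/4, each party can apply it directly to its own shares, with no interaction at all.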
The Path to Sub-Second
The open source library TF Encrypted, built on top of TensorFlow, allowed us to easily run many experiments and scale to large models. We look forward to experimenting over the next couple of months to push private predictions for this type of model under a second. Here are several potential avenues:
- Improve the polynomial approximation approach
- Tweak neural network architectures
- Network pruning
- Continue to integrate cutting-edge secure computation protocols and introduce engineering optimizations
We will continue to share updates on our progress. In addition, we encourage you to experiment with TF Encrypted and keep pushing the boundaries of this technology; we look forward to hearing your feedback on how we can make encrypted deep learning more accessible.
We truly hope encrypted deep learning can contribute to democratizing access to AI healthcare while maintaining privacy. Making encrypted deep learning faster will improve the diagnostic power of these models by enabling access to a gigantic source of siloed data. It will also help to provide diagnoses to more patients, at a lower cost and without compromising personal data.
About Dropout Labs
We’re a team of machine learning engineers, software engineers, and cryptographers spread across the United States, France, and Canada. We’re working on secure computation to enable training, validation, and prediction over encrypted data. We see a near future where individuals and organizations will maintain control over their data, while still benefiting from cloud-based machine intelligence.
If you’re passionate about data privacy and AI, we’d love to hear from you.