Transfer Learning via Blockchain
Code Samples Using the Dopamine Protocol
We all understand the importance of learning from the experiences of others. In our lives, we meet many people whom we trust and who have been down the same road we are traveling. Although their path may not be exactly the same as ours, many of them have encountered and solved problems similar to the ones we face. By using others’ experiences as a starting point, we accelerate our own progress. But can the same approach be applied to machine learning (ML)?
Below, we will show a simplified implementation based on the Dopamine network that applies similar concepts in ML.
In machine learning, the term “transfer learning” describes a similar approach, where knowledge gained while solving one problem is applied to a different problem. Not long ago, transfer learning was predicted to be “the next driver of machine learning success” (Andrew Ng, NIPS 2016). I am not sure that transfer learning has delivered on that promise yet, at least at the commercial level.
One cause could be that, unlike humans, who have an old and established social structure through which we “transfer” learning to our friends, relatives, and customers, ML has only a relatively young equivalent. Because this structure is still developing, there is a lack of trust among the different entities. The inevitable outcome is a structural market barrier, where most transfer learning happens within a single research group or entity.
Google’s Cloud AutoML
However, the industry shows signs of a demand to break that transfer learning structural market barrier. Google recently released an alpha version of “Cloud AutoML”, starting with “AutoML Vision”, which is a transfer learning-based machine learning service that allows consumers to train custom vision models for their own use cases. The basic idea behind this technology is that Google provides a pretrained image classification model, allowing the consumer to upload their own images to train and evaluate a new model that is focused on their own classification needs.
The Dopamine protocol enables a similar transfer in a decentralized way, where small suppliers can also provide such services.
In our example below (code available on GitHub), we show a “producer” that plugs a pre-trained service into the Dopamine network. This service was trained on about 36,000 images of the digits 1–6 (from the MNIST dataset). The service allows consumers to instantiate their own copy of the model, then train and evaluate it on their own data. A consumer that has only 42 samples of digits from classes unknown to the producer (7, 8, 9, 0) uses this service to establish its own classification model.
The Dopamine layer takes care of matching both sides and passing rewards (clearer examples are available in our previous publications). We also show a comparison of two consumers that both have the same 42 training samples:
- The “self learning” consumer (blue below), which chooses to train a model on its own, using the same technique the producer did.
- The “transfer learning” consumer (green below), which chooses to consume a transfer learning service from the Dopamine network.
It’s clear in the chart below that although both consumers have the same data and use the same deep learning architecture, the consumer that “learned from the experience of others” achieved higher accuracy.
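The actual experiment lives in the GitHub repository mentioned above; as a rough, self-contained sketch of the same comparison, the NumPy toy below pretrains a small network on plentiful data for some classes, then contrasts a consumer that trains from scratch on a handful of samples of new classes with one that reuses the producer’s frozen hidden layer. The synthetic data, network sizes, sample counts, and function names are all illustrative assumptions, not the Dopamine implementation.

```python
import numpy as np

# Toy stand-in for the MNIST experiment: each "digit" class is a noisy
# prototype in a 64-dimensional space. Everything here is illustrative.
rng = np.random.default_rng(0)
DIM, HID = 64, 32
protos = rng.normal(size=(10, DIM))  # one prototype per class

def sample(classes, n_per_class):
    """Draw noisy samples around each class prototype."""
    X = np.vstack([protos[c] + 0.5 * rng.normal(size=(n_per_class, DIM))
                   for c in classes])
    y = np.repeat(classes, n_per_class)
    return X, y

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, classes, W1=None, freeze=False, epochs=300, lr=0.5):
    """One-hidden-layer net; optionally reuse and freeze the hidden layer."""
    idx = {c: i for i, c in enumerate(classes)}
    Y = np.eye(len(classes))[[idx[c] for c in y]]
    if W1 is None:
        W1 = 0.1 * rng.normal(size=(DIM, HID))
    W2 = np.zeros((HID, len(classes)))
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        G = (softmax(H @ W2) - Y) / len(X)   # cross-entropy gradient
        if not freeze:                        # backprop into hidden layer
            W1 -= lr * X.T @ ((G @ W2.T) * (1 - H ** 2))
        W2 -= lr * H.T @ G
    return W1, W2

def accuracy(X, y, classes, W1, W2):
    pred = np.asarray(classes)[np.argmax(np.tanh(X @ W1) @ W2, axis=1)]
    return float(np.mean(pred == y))

# Producer: pretrain on plentiful data for classes 1-6.
Xp, yp = sample([1, 2, 3, 4, 5, 6], 200)
W1_pre, _ = train(Xp, yp, [1, 2, 3, 4, 5, 6])

# Consumer: only ~10 samples per class for the unseen classes 7, 8, 9, 0.
new = [7, 8, 9, 0]
Xc, yc = sample(new, 10)
Xtest, ytest = sample(new, 100)

# "Self learning": train everything from scratch on the small dataset.
W1_s, W2_s = train(Xc, yc, new)
acc_scratch = accuracy(Xtest, ytest, new, W1_s, W2_s)

# "Transfer learning": reuse the producer's hidden layer, train only the head.
W1_t, W2_t = train(Xc, yc, new, W1=W1_pre.copy(), freeze=True)
acc_transfer = accuracy(Xtest, ytest, new, W1_t, W2_t)

print(f"self learning accuracy:     {acc_scratch:.2f}")
print(f"transfer learning accuracy: {acc_transfer:.2f}")
```

On real image data the frozen features matter far more than in this toy: the exact accuracies here depend on the random seed and noise level, but the mechanics of the comparison (same few-shot data, same architecture, only the reused hidden layer differs) mirror the experiment in the chart.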
The example presented above is only a “toy sample” illustrating possible interactions on the Dopamine network. Further enhancements could include transferring knowledge among a large number of participants, rather than only the two presented here. Furthermore, a trust mechanism, which is in the works, has not yet been presented as part of the Dopamine network. However, the basic concept runs much deeper: just as we try to learn from the experiences of others, decentralization technologies enable us to build AI systems that learn from the experiences of other AI systems. But unlike the human case, where every individual gets to learn from the experience of only a few others (I am ignoring evolution here, which will be covered in a separate post), in the decentralized AI case the potential scale of transfer learning is unmatched.