Dopamine.ai : Will Blockchain Unchain AI?
Are we at a turning point in data processing history?
Quick quiz: of the 5 largest US companies by market capitalization, how many are technology firms?
If you answered “all of them”, you probably understand how centralized the data processing industry is. But what causes this centralization? Is it a direct result of the fact that most of the “Tech Giants” have unlimited access to our personal data? My personal journey tells a different story: the rise and domination of the “Tech Giants” was inevitable given the nature of the available technology.
But now, new technologies are challenging the old rules.
Evolution of Soft & Distributed Data Processing
I was about 11 years old when I received my new Commodore 64 computer, feeling on top of the world after drawing my first circle on the screen. I typed my days and nights away at the keyboard as though I were in a technological wonderland. One day I read somewhere about code functions. “What an absurdity!” I thought. “Why go to the hassle of writing a function if you can just write a goto?” Not much later, when I found I could no longer update my own code, I realized I was wrong. A key property of code (and of data processing in general) is its ability to adapt to change; in other words, its softness.
A different dimension of improving data processing was based on the ability to use code that others had written. Back in the 80s, the focus was on operating systems and DOS was the big winner.
These two principles, “softness” and “collaboration”, continued evolving over the years, providing solutions to more and more complex problems. But while the softness principle (represented by the circles in the diagram below) brought us to new heights in its ability to solve complex problems (e.g., deep learning), the collaboration principle (represented by the stars in the diagram below) did not. This opened a gap between the marginal contribution of each principle to our ability to solve complex problems.
This gap created a situation where, in order to solve the most difficult problems, an individual needed both control over huge teams collaborating and control over massive amounts of data. The only entities with this ability are what we now call the “Tech Giants”. In the current market structure, smaller entities simply cannot meaningfully contribute to solving highly complex problems (unless a Tech Giant eventually purchases the entity). Therefore, even if Tech Giants didn’t already exist, somebody would eventually have needed to create them. But is relying on massive corporations to solve our challenging problems the right solution for us as a society? Do we have to create bigger and bigger Tech Giants in order to solve more complex problems? Can a single entity be big enough to bring the prosperity we want to all of humankind?
Some of you might say we are missing the tools to take the next collaborative step.
I propose that nothing is missing: all of the pieces of the puzzle are readily available to us. Putting the pieces together is inevitable and will create a new, decentralized collaborative processing network. Most likely, several groups are working on this solution as I type these words, just as we’re currently working on it in our Dopamine.ai labs. Here are the main pieces of the puzzle I mentioned:
5 Essential Components for a “Decentralized Collaborative Processing Network” (DCPN):
1: Data Processing Plasticity
Over the years, I have enjoyed both the world of managing the development of software projects and the world of building AI solutions. With the conviction that the ultimate solution for soft data processing lies in the structure of neural networks, I decided to head down the neuroscience PhD track. Luckily, I began my studies in the Golden Age of neural networks that accompanied the rise of deep learning methods.
Though there are significant differences between biological and artificial neural networks, I find the existing lines of similarity very interesting and useful. Plasticity in biological systems allows neurons in the brain to adjust their activity in response to injuries or changes in the environment. A similar plasticity is achievable in deep learning systems, allowing a model to be transferred to a different environment. Such plasticity lets different entities, each with its own data processing model, collaborate to come up with a better, synergetic model.
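To make the transfer idea concrete, here is a minimal sketch in plain NumPy (with entirely made-up data, and not Dopamine.ai’s actual method): a “pretrained” feature extractor is kept frozen, and only a small new head is fit for a new task, so the model is transferred rather than rebuilt from scratch.

```python
import numpy as np

# Toy sketch of plasticity via transfer: freeze learned features,
# retrain only a new head for a new environment.
rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(4, 8))    # stands in for weights learned elsewhere

def features(x):
    return np.tanh(x @ W_frozen)      # shared, frozen representation

# A new, hypothetical task the original model never saw:
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(float)

# Fit only the new linear head (least squares); W_frozen is untouched.
Phi = np.hstack([features(X), np.ones((200, 1))])   # features + bias
w_head, *_ = np.linalg.lstsq(Phi, y, rcond=None)

accuracy = ((Phi @ w_head > 0.5) == y).mean()
```

The point is the division of labor: one party can contribute the frozen representation while another fits only the cheap, task-specific head on top of it.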
2: Good Security & Privacy tools
I spent some of the mid-90s leading a cyber-crypto-security team. Back then, one of our goals was to defend against malicious controls downloaded with HTML pages. At the time, the main controls were Microsoft’s ActiveX controls and Sun’s Java applets. The winner between the two was clear, and the distinguishing factor was security: Java applets had much better security mechanisms than ActiveX did.
Today, it is a truth universally acknowledged that security comes first, and there are mature methods for protecting the privacy of end users while still computing on their data, such as differential privacy, secure multi-party computation, and homomorphic encryption.
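As one concrete example of such a method, here is a minimal differential-privacy sketch using the Laplace mechanism; the dataset and the query are hypothetical stand-ins for real user data.

```python
import numpy as np

# Release an aggregate statistic about users without exposing any
# individual record. A count query has sensitivity 1, so adding
# Laplace(1/epsilon) noise yields epsilon-differential privacy.
rng = np.random.default_rng(42)
ages = rng.integers(18, 90, size=10_000)        # hypothetical user data

epsilon = 0.5                                   # privacy budget
exact = int((ages >= 65).sum())                 # the true (secret) count
noisy = exact + rng.laplace(scale=1.0 / epsilon)

print(f"exact={exact}, released={noisy:.1f}")
```

A smaller `epsilon` means stronger privacy and noisier answers; the released value is useful in aggregate while revealing almost nothing about any single user.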
3: Powerful AI libraries
Unlike in my teenage years, when playing with (tiny) neural networks was considered “weird”, deep learning has now been brought to the masses. Libraries like TensorFlow, PyTorch, Theano, and MXNet, plus various GPU providers, have made it extremely easy to develop neural network solutions. Despite these advancements, it is still not very easy for developers to collaborate.
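For contrast, here is the kind of boilerplate those libraries take off a developer’s hands: a two-layer network learning XOR, with the gradients derived by hand in plain NumPy instead of by a library’s autograd.

```python
import numpy as np

# A hand-rolled two-layer network learning XOR; frameworks like
# TensorFlow or PyTorch automate the backward pass written out below.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

for _ in range(10_000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    d_out = out - y                          # backprop (BCE), by hand
    d_h = (d_out @ W2.T) * (1 - h ** 2)      # through tanh
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

preds = (out > 0.5).astype(float)
```

Multiply this by convolutions, GPUs, and distributed training, and it is clear why the libraries made solo development easy; what they did not make easy is collaboration between separate entities.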
4: Decentralized computation
Before diving into the Dopamine.ai project, I was a member of the algorithmic trading industry, focusing most recently on Bitcoin and cryptocurrency. Decentralized computation environments like Ethereum, based on Blockchain technology, are the most important piece in the puzzle. The ability for different entities to have a “shared memory” that is not dependent on any other entity opens up the door for collaborative computing.
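The core of that “shared memory” property can be sketched in a few lines. This toy chain (with hypothetical payloads) is not Ethereum itself and leaves out consensus, signatures, and peer-to-peer replication; it only shows why an append-only, hash-linked ledger is hard to rewrite silently.

```python
import hashlib
import json

# Each entry commits to the hash of the previous one, so any rewrite
# of history breaks every later link in the chain.
def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

ledger = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]

def append(data):
    ledger.append({"index": len(ledger),
                   "prev": entry_hash(ledger[-1]),
                   "data": data})

def verify(chain):
    return all(chain[i]["prev"] == entry_hash(chain[i - 1])
               for i in range(1, len(chain)))

append("model update from team A")   # hypothetical payloads
append("model update from team B")
valid_before = verify(ledger)        # chain is intact

ledger[1]["data"] = "tampered"       # any rewrite invalidates the links
valid_after = verify(ledger)
```

Because every participant can run `verify` themselves, no single entity has to be trusted with the shared state, which is exactly what collaborative computing needs.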
5: Frictionless Rewarding System
One of the best characteristics of the algo trading world (besides the brilliant people there, some of whom joined the Dopamine.ai team), is how directly rewarding the work is. It’s very easy to measure the contribution of each trading group to the Firm’s PnL (though if you ask each individual for what he believes he contributed, the total percentage can easily reach 1000% 😊). This reward system is frictionless: no sales department, no marketing department — every cent counts. In such a system, the various teams’ motivation is always high. Blockchain technology now brings similar principles to B2B, creating a situation where the same principles that enabled capitalism to beat communism now enable a DCPN to beat the old “Tech Giants” structure I discussed earlier.
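As a toy illustration with made-up numbers, a frictionless split simply pays each group in proportion to its measured contribution, with nothing lost to intermediaries:

```python
# Hypothetical PnL contributions per trading group (made-up figures).
contributions = {"group_a": 450_000.0,
                 "group_b": 300_000.0,
                 "group_c": 250_000.0}
pool = 100_000.0                         # reward pool to distribute

# Every cent of the pool goes to the contributors, pro rata.
total = sum(contributions.values())
rewards = {g: pool * c / total for g, c in contributions.items()}
```

On a blockchain, a rule like this can be enforced automatically by a smart contract, so the rewards and the contributions settle in the same system.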
From a CPN to a DCPN named Dopamine.ai
While managing several AI-based algo trading groups, my teams and I recognized an opportunity to strengthen all of the groups’ data processing capabilities simultaneously. Each trading group specialized in its own market coverage and strategy, and each group had data and AI models that could help the others.
However, although each group certainly had the ability to help the others, the teams did not have the correct incentivization structure to encourage this collaboration. None of the groups wanted to risk exposing their IP, and unifying the groups didn’t make sense either because we didn’t want any group to lose its granular focus.
As a result of this conundrum, we developed a “Collaborative Processing Network” that enabled the different teams to collaborate and reward each other as is done in the free market. The success was huge. Recently, when Ethereum became available, all of the pieces of the puzzle mentioned above were finally in place, and it seemed like the right time to implement the first DCPN and enable global computation. We have decided to call this DCPN “Dopamine”, since biological data processing mechanisms are the main inspiration for the concept; dopamine, of course, is the main neurotransmitter involved in the brain’s reward system.
Please share your thoughts! If you want further information or updates on our project, please subscribe on our site.