Hello World — Raven Protocol

RavenProtocol · May 3, 2018 · 5 min read

First truly distributed and decentralized Deep Learning training protocol

Even after the massive call for democratisation and decentralisation in AI and ML, development in these areas has, in the true sense, been limited. Developers and enterprises held back from extensive use of deep learning to bring a cognitive cycle of constant learning and improvement to their products and user experience; the sheer lack of resources and the inability to scale economically left them with few choices. These factors were no barrier for large tech corporations, which had access to an ocean of data and the sheer volume of compute power needed to customise their AI developments with ease. In the meantime, they proposed a ‘democratisation’ of AI that mostly compensated for their own technical debt, while professing that it could aid the general AI community. The AI community was thus granted access to huge GPU clusters and introduced to Machine Learning (ML) techniques through open-source frameworks and MOOCs (online courses).

But the underlying issue went unnoticed amid the furore and the sudden surge in demand for advanced AI solutions: the economical scaling of compute power.

The Struggles That Remained

The development process to train AI/ML models can take weeks or even months when run on basic computers with limited capacity. The cost of acquiring better compute chips (GPUs) does not make the process any easier. The intensive, frequent use of fast compute resources to calculate and update the gradients across the neurons of a deep neural network, inferred from the training data, usually costs more than small and medium-scale developers and companies can bear. Cloud computing helps to an extent, but the cost of acquiring resources there is still unaffordable for AI development tasks: the usual spend ranges from $2.50 to $17 USD per hour on any given cloud platform.
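
To put that in perspective, here is a rough back-of-the-envelope calculation in Python. The $2.50 to $17 hourly range comes from the paragraph above; the mid-range rate and the two-week duration are illustrative assumptions, not figures from Raven.

```python
# Rough, illustrative cloud-training cost estimate (assumed figures).
hourly_rate = 10.0   # USD/hour, inside the $2.50-$17 range quoted above
hours = 14 * 24      # two weeks of continuous training
print(f"${hourly_rate * hours:,.0f}")  # -> $3,360 for a single instance
```

And that is for a single machine; parallel experiments or larger models multiply the bill accordingly.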

The simple solution to this inaccessibility now lies in crowdsourcing. Crowdsourcing has long been disrupting existing markets as the David (read: the Ubers and Airbnbs of the world) among various Goliaths, making those services cheaper and more viable. The world of AI has been witnessing it too. From crowdsourcing development on Kaggle to gathering data using Ocean Protocol, the AI ecosystem is welcoming these new approaches. Raven aims to carry the torch further by building one of the first truly decentralized and distributed deep learning training systems, harnessing idle compute resources to train deep learning models economically.

AI enthusiasts and entrepreneurs wanting to innovate through AI can now fill the gap and reap the benefits of the technology, innovating with their own and crowdsourced resources. Many in the AI community, viz. Singularity.Net, Ocean Protocol, OpenMined, Deep Brain Chain, and more, have built platforms where compute resources and data can be shared securely over the blockchain to fuel the ML/DL algorithms transforming numerous business models.

Raven Protocol enables decentralised, incentivised and secure transactions to train an ML/DL model successfully.

[Decentralised & Distributed] Training of Deep Neural Networks

Neural networks have been around for several decades and have evolved into Deep Neural Networks (DNNs), which have triggered tremendous success in different fields of application, notably pattern recognition.

The practical constraint in this training method lies in how a DNN is either trained centrally on a single node and then fetched by different servers for their applications, or split among several servers and trained there. Needless to say, the computation capacity needed for such training is huge, limiting it to powerful GPUs and servers. Raven approaches this problem by facilitating dynamic allocation of work across the participating devices in the network, eliminating any added dependency on host nodes and significantly reducing the compute power required in-house.
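
To make the idea concrete, here is a minimal sketch in Python of how small gradient-computation tasks over individual data shards could be handed dynamically to whichever worker is free. It is purely illustrative and assumes a toy linear model; it is not Raven's actual implementation, and names such as compute_gradient and train_distributed are our own.

```python
# Minimal sketch (not Raven's implementation): a coordinator hands small
# gradient tasks for individual data shards to whichever worker is free.
import numpy as np
from concurrent.futures import ThreadPoolExecutor, as_completed

def compute_gradient(weights, shard_x, shard_y):
    # Gradient of mean squared error for a toy linear model on one shard.
    preds = shard_x @ weights
    return 2.0 * shard_x.T @ (preds - shard_y) / len(shard_y)

def train_distributed(x, y, num_shards=8, num_workers=4, lr=0.1, steps=200):
    weights = np.zeros(x.shape[1])
    shards = list(zip(np.array_split(x, num_shards),
                      np.array_split(y, num_shards)))
    # The thread pool stands in for a pool of contributor devices: the
    # executor assigns each shard to the next idle worker (dynamic allocation).
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for _ in range(steps):
            futures = [pool.submit(compute_gradient, weights, sx, sy)
                       for sx, sy in shards]
            grads = [f.result() for f in as_completed(futures)]
            weights = weights - lr * np.mean(grads, axis=0)  # aggregate, update
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(1000, 5))
    y = x @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.01 * rng.normal(size=1000)
    print(train_distributed(x, y))  # should approach the true coefficients
```

In a real deployment, the thread pool would be replaced by contributor devices on the network, and the simple averaging step by the protocol's own aggregation and update mechanism.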

Where Raven Protocol differs from similar initiatives is in how it tackles the latency arising from asynchronous updates and the parallelisation of data shards. That latency, for which there was no previous solution, means training models on these other platforms can consume a major chunk of time, up to several weeks or months, irrespective of how much computational power is available. Even when parallelisation is achieved, it remains confined to users with systems that can handle enormous loads, which shuts small-scale users out of such platforms.

Raven is able to build a dynamic graph over the vast number of small synchronous calculations required to train a model.
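
As an intuition for what such a dynamic graph might look like, here is a tiny, self-contained Python sketch that expresses one training step as a graph of small calculations, each executed as soon as its inputs are ready. The Task and run_graph names are illustrative assumptions, not part of Raven's published design.

```python
# Illustrative sketch only: one training step for y = w * x expressed as a
# graph of tiny calculations, each runnable as soon as its inputs exist.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    fn: object          # callable computing this node's value
    deps: list = field(default_factory=list)

def run_graph(tasks):
    # Execute tasks in dependency order; in a distributed setting each
    # ready task could be shipped to any idle contributor node.
    done, results = set(), {}
    while len(done) < len(tasks):
        for t in tasks:
            if t.name not in done and all(d in done for d in t.deps):
                results[t.name] = t.fn(*(results[d] for d in t.deps))
                done.add(t.name)
    return results

w, x, y, lr = 0.5, 2.0, 3.0, 0.1
graph = [
    Task("predict", lambda: w * x),
    Task("error",   lambda p: p - y, ["predict"]),
    Task("grad",    lambda e: e * x, ["error"]),
    Task("update",  lambda g: w - lr * g, ["grad"]),
]
print(run_graph(graph)["update"])  # updated weight after one step: 0.9
```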

Incentivised Sharing of Idle Compute Power

The cost of acquiring powerful CPUs and GPUs to train DNNs becomes minimal through Raven Protocol, which allows the idle compute power of individual contributors’ devices to be shared. Sharing idle computing power to facilitate training saves the enormous expense involved. In return, contributors are compensated with Raven tokens (RAV).

Incentivisation happens in two simple verification steps via smart contracts on the Ethereum blockchain.
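
This post does not spell out what those two steps are, so the following Python sketch is only a generic, hypothetical illustration of a submit-then-settle reward flow, modelled off-chain rather than as a real Ethereum contract.

```python
# Hypothetical, off-chain illustration of a two-step verify-then-reward
# flow; the actual Raven smart contracts are not described in this post.
class RewardEscrow:
    def __init__(self):
        self.pending = {}    # contributor -> claimed work units
        self.balances = {}   # contributor -> RAV credited

    def submit_work(self, contributor, work_units, proof):
        # Step 1: a contributor submits a claim with proof of computation.
        if self._proof_is_valid(proof, work_units):
            self.pending[contributor] = work_units

    def settle(self, contributor, rav_per_unit=1):
        # Step 2: once verified, the claim is settled and RAV is credited.
        units = self.pending.pop(contributor, 0)
        self.balances[contributor] = (self.balances.get(contributor, 0)
                                      + units * rav_per_unit)

    def _proof_is_valid(self, proof, work_units):
        # Placeholder check; a real system would verify the training work.
        return proof == f"done:{work_units}"

escrow = RewardEscrow()
escrow.submit_work("node-42", 10, proof="done:10")
escrow.settle("node-42")
print(escrow.balances)  # {'node-42': 10}
```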

What’s Ahead For Us

Economical AI Scaling & Active Experimentation

With next to no CAPEX, Raven will offer compute services at a far cheaper price than any provider in the market. With zero dependency on any single contributor’s or host’s node, resource acquisition becomes efficient and fast. Companies using Raven will be able to run live AI experiments and scale with market demand without burning a hole in their pockets.

Unified Ecosystem

Raven recognised how difficult it is to comfortably interchange the tokens of existing partners in the ecosystem. With ease of use in mind, the RAV token is made inter-exchangeable within the Raven ecosystem with partner services, and partner services’ tokens can still be used inside Raven. This allows active participation from other AI communities built on blockchain.

A majority of people are still oblivious to the struggle a small section of the AI community is engaged in to make AI an easy and accessible affair for all. That struggle stems from the realisation that AI is here and will become part of our lives in ways we may not yet fathom. Yet AI companies, and companies seeking to implement AI in their systems to bring about new ways of improving life, find themselves crippled in fully exploring their ideas. Raven aims to help such individuals and companies exploit the full potential of AI, economically.

www.RavenProtocol.com is a decentralized and distributed deep-learning training protocol, providing cost-efficient and faster training of deep neural networks.