Q2 2021: Heads Down Building

Dear Raven Community,

This quarter we’ve focused heavily on the development and future growth of Raven Protocol. As sharp-eyed followers of our GitHub noticed, we restructured the framework so contributors can easily implement different algorithms.

The Raven Distribution Framework (RDF) is our suite of libraries for training machine learning and deep learning models in a decentralized, distributed manner. It can also be used to perform statistical operations. Most importantly, it facilitates faster and cheaper training of ML/DL models on browser nodes. With RDF, we aim to build an ecosystem and accelerate development of the fundamentals.

As our codebase grows, let’s explore the different libraries/repositories:

  • RavCom: RavCom is a common library containing methods to interact with databases such as MySQL, Redis, and PostgreSQL. It is shared by most of our other libraries.
  • RavOp: An op is the fundamental unit of RDF, and RavOp is our library for working with ops: you can create ops, interact with them, and create scalars and tensors. RavOp is a crucial building block of the framework and can be used to write various algorithms, formulas, and mathematical calculations.
  • RavSock (Socket Server): RavSock is the second most crucial building block of the framework. It sits between developers, who create ops and write algorithms, and contributors, who contribute idle computing power. It facilitates the efficient distribution of ops and the efficient merging of results.
  • RavML: RavML is the machine learning library built on RavOp. It contains implementations of various machine learning algorithms such as ordinary least squares, linear regression, logistic regression, KNN, K-means, mini-batch K-means, decision tree classifiers, and naive Bayes classifiers. These algorithms can be used out of the box. We are constantly working on new algorithms and looking for enthusiasts to contribute.
  • RavViz: RavViz is the visualization library. While training models, it is crucial to understand how your model is doing. In RavViz, you can see the progress of models, ops and their values, and graphs and their ops.
  • RavJS: RavJS is the JavaScript library that computes the various ops on the browser node. We support 100+ ops so far and are constantly adding new ones. Currently, most ops are computed with TensorFlow.js because of its support for thousands of operations; we will very soon begin work on our own library.
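As a rough mental model of how ops compose into work that can be farmed out to browser nodes, here is an illustrative sketch in plain Python. This is not the actual RavOp API; the `Op` class and the `scalar`, `add`, and `mul` helpers are hypothetical names invented for the example:

```python
# Illustrative only: an "op" modeled as a deferred computation node.
# A scheduler (RavSock's role in RDF) could dispatch each node to a
# different contributor and merge the results back up the graph.
class Op:
    def __init__(self, fn, *inputs):
        self.fn = fn          # the computation this op performs
        self.inputs = inputs  # upstream ops whose results it consumes

    def compute(self):
        # Resolve inputs recursively, then apply this op's function.
        return self.fn(*(i.compute() for i in self.inputs))

def scalar(v):
    return Op(lambda: v)

def add(a, b):
    return Op(lambda x, y: x + y, a, b)

def mul(a, b):
    return Op(lambda x, y: x * y, a, b)

# (2 + 3) * 4 expressed as a graph of ops rather than eager arithmetic
expr = mul(add(scalar(2), scalar(3)), scalar(4))
print(expr.compute())  # → 20
```

The point of the deferred structure is that the graph exists before any computation runs, so each node can be scheduled wherever idle capacity is available.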

Our heads down building was rewarded with an Ocean Protocol partnership!

It was humbling that an AI/ML industry titan like Ocean recognized the importance of the work we’re doing. We will become a Compute Provider in Ocean Compute-to-Data and will be publishing a range of algorithms on Ocean Market, from Machine Learning to Federated Analytics and of course the holy grail — Federated Learning.

In Ocean, a Compute-to-Data infrastructure is set up as a Kubernetes (K8s) cluster, e.g. on AWS or Azure, running in the background. This cluster is responsible for running the actual compute jobs, out of sight of marketplace clients and end users. While this is an incredible feat in itself, users and Data Providers may want alternatives when choosing which Compute Providers to approve. The spirit of decentralization may be a philosophical choice for some, but it is a strict requirement for others. Raven Protocol provides the decentralized option when choosing a Compute Provider.

On top of that, Raven provides an additional layer of privacy for Ocean Compute-to-Data. We mentioned that we will be publishing Federated Learning algorithms. The idea is simple: a neural network is randomly initialized; weight updates are computed next to the data itself, inside a data silo, and then sent back to the neural network. This is repeated in data silo #1, data silo #2, data silo #3, and so on, so the network gets trained across many data silos without data ever leaving the premises of each respective silo. The Raven Distribution Framework enables this in Compute-to-Data.
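The federated loop described above can be sketched in a few lines of Python. This is an illustrative toy, not RDF's actual implementation: the one-parameter linear model, the learning rate, and the per-silo data are all made up for the example. What it does show faithfully is the privacy property, since only weight updates cross the silo boundary:

```python
# Toy federated training: each silo computes a weight update locally;
# the server only ever sees the updates, never the raw (x, y) data.
import random

def local_update(w, silo_data, lr=0.05):
    # Gradient of mean squared error for a 1-D linear model y = w * x,
    # computed entirely inside the silo.
    grad = sum(2 * (w * x - y) * x for x, y in silo_data)
    return -lr * grad / len(silo_data)

random.seed(0)
w = random.random()  # randomly initialized "network" (a single weight)

# Three private silos; here each holds samples of the true relation y = 3x.
silos = [[(x, 3 * x) for x in range(1, 5)] for _ in range(3)]

for _ in range(50):  # repeat across silo #1, silo #2, silo #3, ...
    updates = [local_update(w, silo) for silo in silos]
    w += sum(updates) / len(updates)  # aggregate updates only

print(round(w, 2))  # → 3.0, the true slope, learned without pooling data
```

In a real deployment the aggregation step would be performed by the compute infrastructure (the role RDF plays inside Compute-to-Data), and the model would be an actual neural network rather than a single weight.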

Towards Growth

We know that at the heart of protocol growth is the community. We need the best AI researchers, the best ML engineers, and the brightest minds to support this piece of decentralized infrastructure. This enables us to research, develop, and publish more AI/ML algorithms. This pushes us closer and closer to our goal of being the decentralized option for AI/ML training.

Raven Protocol: Q2 2019 Tech and Community Update:

Raven Protocol: Q3 2019 Tech project development Update:

Raven Protocol: Q4 2019 Tech project development Update:

Raven Protocol: Q1 2020 Tech project development Update:

Raven Protocol: Q2 2020 Development and Community Update:

Raven Protocol: Q3 2020 Development and Community Update:

Raven Protocol: Q4 2020 Development and Community Update:

Raven Protocol: Q1 2021 Development and Community Update:

Raven Protocol Project Review:

Raven Protocol White Paper:

Official Email Address: founders@ravenprotocol.com
Official Website Link: http://www.RavenProtocol.com
Official Announcement Channel: https://t.me/raven_announcements
Official Telegram Group: https://t.me/ravenprotocol
Official Twitter: https://twitter.com/raven_protocol
Official Medium: https://medium.com/ravenprotocol
Official LinkedIn: https://linkedin.com/company/ravenprotocol
Official Github: https://www.github.com/ravenprotocol
Official Substack: https://ravenprotocol.substack.com
Official #DeAI Discord: https://discord.gg/WF47ckd



