**Sentient Proof-of-Compute Introduction**

**We are very excited to unveil some of the technical research we’ve been working on this year. We have made great progress toward a decentralized artificial intelligence network algorithm. This post is intended to serve as the beginning of the larger developer community we aim to support. If you’re interested in contributing to the Sentient project, please reach out to us by email: research@sentient.org**

Our R&D team has been working on the Proof-of-Compute consensus protocol since earlier this year and is now getting close to a design that looks usable in a blockchain setting. At this stage, we’d like to open the design paper and the initial code to a select group of mathematicians, cryptographers, and computer scientists to review, comment on, and probe for vulnerabilities in the design and the initial implementation.

Below, we provide a summary of the process we went through over the past months, findings along the way, problems we’ve encountered to date, and potential solutions.

**Motivation and Goals**

We aim to achieve the following:

- Use the combined network resources for useful computations that can help advance the development of artificial intelligence technologies.
- Give network participants control over the privacy of their data.
- Give network participants the financial incentive to contribute compute resources and datasets to the marketplace.

**Network Layers**

Our current network infrastructure design includes the following layers:

- Financial Incentive layer: SEN currency.
- Data Storage layer: the initial Sentient network is based on Sia; however, the storage capabilities of Sia do not fit our use cases, so this layer needs a substantial (or complete) rework. There are also specific data-privacy considerations that need addressing.
- Distributed Compute layer: the subsystem responsible for the artificial intelligence technologies (federated machine learning, classification, etc.).
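To make the federated-learning workload of the compute layer concrete, here is a minimal sketch of the federated-averaging idea using a plain linear model as a stand-in for a deep network. This is our own illustrative code, not part of the protocol: in the decentralized setting the averaging step would be performed by miners rather than a central server.

```python
def local_update(global_weights, data, labels, lr=0.1, epochs=1):
    """One client's local training step: per-sample gradient descent on a
    linear model (a stand-in for the full deep-learning update)."""
    w = list(global_weights)
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def federated_average(client_weights, client_sizes):
    """Aggregation step: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]
```

Each client trains on data that never leaves its machine; only the resulting weight vectors are aggregated.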

We started researching the idea of a decentralized artificial intelligence system by exploring the federated machine learning paper, made public by Google in 2017, applied within a decentralized proof-of-work blockchain setting. Google’s research focused on applying the approach to mobile devices running Android OS, but adapting the principle to a decentralized miner-powered network looked promising. Over the course of 2018 we focused on modelling this new execution setting for training deep learning models using distributed ledger technologies and laying the foundation for the supporting protocol. The assumptions, prerequisites, and notation used are stated below:

**Notation**

**Prerequisites**

- A distributed ledger network secured via Proof-of-Work.
- Smart Contract or scripting capabilities, not necessarily Turing-complete.
- A blockchain explorer API exposed by each *full node* running the mining software and participating in the consensus protocol.

**Assumptions**

- Global model weights, as well as intermediate models, can be stored as part of a data field in a transaction.
- Clients, as well as *full nodes*, can produce strong cryptographic proofs of correct computation on the correct datasets. Furthermore, these proofs of computation can be broadcast as part of a transaction and can be publicly verified by any node in reasonable time.
- Clients can be trusted not to cheat the system and to use real data for training locally.
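The first assumption can be illustrated with a toy sketch. The field names and byte layout below are our own, not a specification: model weights are serialized into a transaction's data field alongside a digest that any node can recompute to check integrity.

```python
import hashlib
import struct

def pack_weights(weights):
    """Serialize a flat list of float weights into bytes suitable for a
    transaction's arbitrary-data field (count, then little-endian doubles)."""
    return struct.pack(f"<I{len(weights)}d", len(weights), *weights)

def unpack_weights(blob):
    """Recover the weight list from the packed bytes."""
    (n,) = struct.unpack_from("<I", blob, 0)
    return list(struct.unpack_from(f"<{n}d", blob, 4))

def make_model_tx(sender, weights):
    """A toy transaction carrying model weights plus a SHA-256 digest."""
    data = pack_weights(weights)
    return {"from": sender,
            "data": data,
            "digest": hashlib.sha256(data).hexdigest()}
```

A real transaction format would also carry signatures and size limits; this only shows that weights fit naturally into a data field.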

**Scenario**

**Verification Machinery**

As part of the algorithm design (Step 7) we needed to answer the question: “How can we verify computations without re-executing them?” One of the promising directions was Multi-QAPs (Quadratic Arithmetic Programs), introduced by the Microsoft Research team in *Geppetto: Versatile Verifiable Computation*, which builds on *Pinocchio: Nearly Practical Verifiable Computation*. In August 2018 we began a deeper feasibility study of verifiable computation for machine learning and an attempt to implement the suggested theory. The goal of the investigation was to determine whether the Geppetto verifiable computation (VC) system would be suitable for use in a blockchain-based federated learning setting.

The analysis was performed using the Geppetto source code (https://github.com/corda/msr-vc/tree/master/geppetto/code/compiler). We set up infrastructure to compile, keygen, prove, and verify various example problems: at first to learn the details of the Geppetto implementation, and then to study its scaling behaviour.

To study the potential application of Geppetto to machine learning problems, we developed a very basic neural network implementation. The current Geppetto implementation has various limitations that restrict the scope of programs it can compile (e.g., only positive integers can be used in a program, all loops must be of a fixed size, etc.). Our implementation was therefore a conservative version of neural-net backpropagation.
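For orientation, here is a Python sketch of the kind of basic two-layer dense network involved. Our actual implementation was in C; the layer sizes, activations, and parameter names here are illustrative only.

```python
import math

def dense(x, weights, biases):
    """One fully connected layer: y = Wx + b."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def relu(v):
    """Elementwise rectifier."""
    return [max(0.0, u) for u in v]

def softmax(v):
    """Normalize scores into a probability distribution (stable form)."""
    m = max(v)
    exps = [math.exp(u - m) for u in v]
    s = sum(exps)
    return [e / s for e in exps]

def two_layer_forward(pixels, params):
    """Forward pass: input -> hidden (ReLU) -> 10-way softmax."""
    hidden = relu(dense(pixels, params["w1"], params["b1"]))
    return softmax(dense(hidden, params["w2"], params["b2"]))
```

Even this small forward pass relies on floating point, dynamic-length loops over layer sizes, and exponentials, which previews the compilation problems discussed below.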

The problem space we decided to study was classifying the MNIST dataset (http://yann.lecun.com/exdb/mnist/), which is generally regarded as the “Hello, World!” dataset of deep learning.

The case we used for analysis is based on the introductory TensorFlow example (https://www.tensorflow.org/tutorials/), which uses a simple two-layer dense neural net to classify MNIST (we refer to this network architecture henceforth as “naive MNIST”). This network is about as simple a neural network as there is and, as noted in the tutorial, is only able to achieve about 88% accuracy on the MNIST dataset (hence “naive”). We implemented a simplified version of this in C, with parameters to control the sizes of the training sets and the various layers. Geppetto was, in fact, unable to compile the simplified example as-is; we had to use smaller sizes for the input/hidden layers to get anything to compile and test. In attempting to evaluate Geppetto for ML purposes, we encountered numerous limitations of its programming model and implementation. To list a few issues with Geppetto:

- Can only handle integers: in other words, no support for floating-point arithmetic. It is possible to cleverly design an ML implementation based on integers, but it wouldn’t be trivial and would need further feasibility investigation. It’s unclear whether this is a fundamental restriction of MQAP-based VC or simply something not yet implemented in Geppetto. Based on our understanding of MQAP, floating-point may indeed be a theoretical challenge to handle.
- Can’t handle negative integers. This seems like a limitation of the Geppetto implementation, and not of MQAP-based VC in general.
- Can’t handle modular arithmetic or integer division. This is also probably a limitation of the Geppetto implementation, not MQAP-based VC in general.
- Can’t handle dynamic loop sizes/control flow.
- Can’t handle dynamic indexing into an array/matrix.
- Nested loops can’t be partitioned/banked.
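To illustrate the first limitation, here is a sketch of how floating-point weights might be emulated with scaled integers (fixed-point). This is a conceptual sketch, not code Geppetto could compile: the rescaling step uses integer division, which Geppetto also cannot express natively, and negative values would need an offset encoding under the positive-integers-only restriction.

```python
SCALE = 1 << 16  # 16 fractional bits (Q16.16-style encoding)

def to_fixed(x):
    """Encode a real value as a scaled integer."""
    return int(round(x * SCALE))

def from_fixed(q):
    """Decode a scaled integer back to a real value."""
    return q / SCALE

def fixed_dense(x_fixed, w_fixed, b_fixed):
    """One dense-layer output computed entirely in integers.
    Each product carries 2 * 16 fractional bits, so the accumulated sum
    must be rescaled by integer division before adding the bias."""
    acc = 0
    for xi, wi in zip(x_fixed, w_fixed):
        acc += xi * wi
    return acc // SCALE + b_fixed
```

Whether divisions like `// SCALE` can be expressed as verifiable multiplication constraints is exactly the kind of question the further feasibility investigation would need to answer.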

This analysis was based on Geppetto, which uses the MQAP idea, seemingly the most promising area of VC research available today. It became clear to us that Geppetto is not suitable for use in its current state; but what about building an MQAP-based solution, based on the ideas from the Geppetto paper and tailored to the ML use case? In other words, *could we do better than Geppetto if we specialized for the Consensus AI model?* There are a few obvious potential places for improvement:

- Hand-crafting MQAP specific to ML problems
- Overcoming Geppetto limitations
- Optimizing the source code itself
- Optimizing for the blockchain+ML use case

While we encountered limitations in the generic Geppetto implementation, we think there is potential to improve upon the initial research using other methods, given further time and resources to explore the challenges noted. We require more time and probably more eyes on the problem; further work would have to be validated and vetted by the VC community before it could be accepted as sound. **We are continuing the research work on this problem; let us know if you want to help (email research@sentient.org).**

**Next Steps: Aggregation of Private Data**

The second big item to solve is for the Sentient network to provide the ability to aggregate private client data without sharing the details: giving users control over data privacy while providing ways to benefit financially from making their data available for research purposes. This can be accomplished via the mechanisms of federated learning, but there may be other mechanisms as well. We propose a scheme that uses the difficulty of the discrete log problem (**DLP**) over integers **mod p** to accomplish two tasks:

- Collect provably-anonymous private data
- Provide proof-of-work to secure a blockchain

Since DLP is hard, this work can be used as a proof-of-work for a blockchain. Miners would be incentivized to solve it by mining rewards and by additional incentives for data aggregation; since the resources of the entire mining community are working on the problem, harder versions of the DLP (i.e., larger values of **p**) could be attempted. Miners only get rewarded for mining valid blocks, which would require aggregating some minimum number of values. The difficulty of solving DLP for the aggregate product is the same as solving DLP for any single instance; therefore, finding out someone’s vote is of the same order of difficulty as mining, namely, outside the capacity of anyone with fewer resources than the combined mining community.
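The aggregation mechanism can be sketched as follows. The prime and base here are illustratively small (a real deployment would use a much larger prime), and the brute-force search stands in for the actual proof-of-work: each participant publishes g^v mod p instead of the raw value v, and multiplying the published encodings adds the hidden exponents, so only the aggregate sum is ever recovered.

```python
# Toy parameters, for illustration only: p = 2^31 - 1 with base 7.
P = 2_147_483_647
G = 7

def encode_value(v):
    """A participant publishes g^v mod p instead of v itself."""
    return pow(G, v, P)

def aggregate(encodings):
    """Multiplying encodings adds the hidden exponents:
    prod(g^v_i) = g^(sum v_i) mod p."""
    acc = 1
    for e in encodings:
        acc = (acc * e) % P
    return acc

def brute_force_dlp(target, bound):
    """The miners' job: solve the DLP for the aggregate by search.
    (Infeasible at real parameter sizes, which is the point.)"""
    for s in range(bound + 1):
        if pow(G, s, P) == target:
            return s
    return None
```

Recovering any single participant's value from its encoding is a DLP instance of the same difficulty as the aggregate, which is the privacy argument above.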

There are many potential applications of a network functioning in this way, e.g., surveys/polls, voting, elections, sentiment analysis, and corporate and organizational use cases. The advantage of this approach is that applications help both fund and secure the network, with organizations, developers, analysts, researchers, etc. using the private data collection API in a fee-for-service model.