# Fantom Archive: Distributed Computing

by Andre Cronje

# Introduction

Distributed Computing in a decentralized context is currently only as efficient as the consensus-winning node. The goal is to harness the combined computing power of all participants; instead we have the computing power of a single node. We propose a cheap, asynchronous, fault-tolerant, verifiable alternative to current decentralized solutions.

# Background

# Price of Computing

The current computing standard is cloud computing services such as AWS, Google Cloud, and Azure. We will use AWS as our benchmark.

t2.medium on-demand: $0.0464 per Hour

t2.medium spot-pricing: $0.026 per Hour

t2.medium reserved instance: $0.027 per Hour

Lambda: $0.20 per 1 million requests, $0.00001667 per GB-second of compute

Aurora (RDS): $0.10 per GB-month, $0.20 per 1 million requests

Cost Example

Low-traffic Account CRUD API (Lambda)

~400,000 requests per month

$0.25/mo (@200ms per request)

Low-traffic Account CRUD API (Aurora — RDS)

~400,000 requests per month

$0.3/mo

Comparing an Ethereum Solution

Low-traffic Account CRUD API (Ethereum)

~400,000 requests per month (Average between inserts, updates, and balance requests)

3.3 billion gas @ 50 gwei = 168 ETH = $68,208.00
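As a sanity check, the Ethereum figure can be reproduced with simple arithmetic. A sketch, assuming roughly 8,400 gas per averaged request and an ETH price of about $406 (both hypothetical inputs chosen to be consistent with the totals above):

```python
# Rough monthly cost of a CRUD API on Ethereum, under the assumptions above.
GWEI_PER_ETH = 10**9

def eth_api_cost(requests, gas_per_request, gas_price_gwei, usd_per_eth):
    """Return (total gas, ETH cost, USD cost) for a month of requests."""
    total_gas = requests * gas_per_request            # gas consumed
    eth = total_gas * gas_price_gwei / GWEI_PER_ETH   # gas * price, in ETH
    return total_gas, eth, eth * usd_per_eth

gas, eth, usd = eth_api_cost(
    requests=400_000,       # ~400,000 requests per month
    gas_per_request=8_400,  # hypothetical average over inserts/updates/reads
    gas_price_gwei=50,      # 50 gwei
    usd_per_eth=406.0,      # hypothetical ETH price implied by the totals
)
print(gas, eth, usd)  # 3360000000 168.0 68208.0
```

The per-request gas figure is the dominant unknown; even small on-chain writes dwarf the Lambda and Aurora costs above by several orders of magnitude.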

Cost differential

The principle of a world computer is to leverage the combined computing capacity of all participating nodes. Given the Proof-of-Work consensus design, only a single block can be created each epoch, and all nodes must execute the same block to arrive at the same state. Thus, instead of the cumulative computing power of all nodes, we have the computing power of a single node.

# Compute vs Storage

AWS S3: $0.023 / GB

AWS Glacier $0.004 / GB

There is no linear correlation between compute capacity and storage capacity. Storage and computing cannot be tightly coupled in a decentralized environment.

# Compute Architecture

- CPU/GPU perform stateless computing.
- Memory allows for temporary fast access storage.
- Disk allows for permanent* slow access storage.

Compute architecture builds on the above three principles: fast compute loads what it needs from slow storage into fast-access memory, manipulates it there, and writes the result back to slow storage.
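That load-manipulate-write-back loop can be sketched minimally, with a JSON file standing in for slow storage and an in-memory dict for fast memory (purely illustrative):

```python
import json
import os
import tempfile

def compute_step(path, key, fn):
    """Load slow storage into memory, transform in memory, write back."""
    with open(path) as f:            # slow storage -> fast-access memory
        state = json.load(f)
    state[key] = fn(state[key])      # CPU manipulates fast-access memory
    with open(path, "w") as f:       # fast-access memory -> slow storage
        json.dump(state, f)
    return state[key]

# usage: seed a state file, then run one compute step against it
path = os.path.join(tempfile.mkdtemp(), "state.json")
with open(path, "w") as f:
    json.dump({"balance": 10}, f)

result = compute_step(path, "balance", lambda v: v * 2)
print(result)  # 20
```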

# Tightly Coupled Architecture

Storage and Compute in Ethereum VM are tightly coupled. They are inherent in the OPcodes of the virtual machine.

Since there is no correlation between computing and storage costs, and compute architecture itself keeps the two separate, this tight coupling must be removed for decentralized computing to be feasible.

# Redesigning Stateless Computing

Network of *n* participants.

Data source of *ds*.

Execution set of *c*.

A single compute node (*1/n*) executes *c* with inputs *ds*.

Compute Node Risk

- Infinite execution time.
- Out of memory.

Compute Provider Risk

- Node terminates before execution finishes.
- Inaccurate execution results.

Problem Statement #1

The compute node must be able to assess the compute requirements of the computation.

Problem Statement #2

The compute node must be able to assess the memory requirements of the computation.

Problem Statement #3

The compute provider must be able to have confirmation of execution results.

Problem Statement #4

The compute provider must be able to confirm the validity of the resulting output.

# Time Complexity

Constant O(1)

Logarithmic O(log n)

Polylogarithmic O((log n)^k)

Sub-linear o(n), e.g. O(n^(1/2))

Linear O(n)

Quasilinear O(n log^k n)

Sub-quadratic o(n^2)

Polynomial O(n^k)

Key qualities of a compute job would need to include:

- Determinism
- Precise computational steps

The Ethereum Virtual Machine established this by linking gas costs to each OPcode execution, making execution time* and memory requirements* calculable for a given set of inputs. This cost is tightly coupled with execution.

A compute node, given a set of inputs (the data source) and a set of instructions, can therefore assess the execution and memory requirements of a specific job.
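Problem Statements #1 and #2 then reduce to static metering: sum fixed per-opcode costs over the instruction stream. A minimal sketch with a hypothetical cost table (the opcodes and prices here are illustrative, not the EVM's):

```python
# Hypothetical fixed costs per opcode (illustrative, not real EVM gas prices).
OPCODE_COST = {"LOAD": 3, "STORE": 20, "ADD": 1, "MUL": 5, "HASH": 30}
OPCODE_MEM  = {"LOAD": 32, "STORE": 0, "ADD": 0, "MUL": 0, "HASH": 64}

def assess(job):
    """Given a list of opcodes, return (execution cost, memory bytes).
    Determinism plus a fixed instruction list make this assessment exact."""
    cost = sum(OPCODE_COST[op] for op in job)
    mem = sum(OPCODE_MEM[op] for op in job)  # simplistic: memory only grows
    return cost, mem

cost, mem = assess(["LOAD", "LOAD", "ADD", "HASH", "STORE"])
print(cost, mem)  # 57 128
```

A node can then decide acceptance up front, e.g. accept only if the offered fee covers `cost` at its own unit price.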

# Job Scheduling

Related works in centralized scheduling

Cloud task and virtual machine allocation strategy

- Total execution time of all tasks

Optimal scheduling of computational tasks

- Execution time — Virtual Machine Tree

A deadline scheduler for jobs

- Execution time and cost — Cloud Least Laxity First

Job scheduling algorithm based on Berger

- Execution time

Online optimization for scheduling preemptable tasks on IaaS cloud systems

- Computational power, execution time, bandwidth — Dynamic allocation

A priority based job scheduling algorithm

Performance and cost evaluation of Gang Scheduling in a Cloud Computing system with job migrations and starvation handling

- Response time and bounded slowdown

Given *n* nodes we can make the following assumptions.

- Optimized efficiency has exactly *1/n* nodes execute the given compute task.
- Fault tolerance has at least *3/n* nodes execute the given compute task.
- Multi Party Computation (MPC) for verifiable results has at least *3/n* nodes execute the given compute task.

Given that MPC requires *3/n* and fault tolerance requires *3/n*, we require *9/n* for fault-tolerant multi-party computation. This is an optimized strategy.
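The redundancy compounds multiplicatively, and the working set for a job can be drawn deterministically. A sketch, assuming fault-tolerance factor 3 and MPC quorum 3 as stated above (the seeding scheme is hypothetical):

```python
import random

FAULT_TOLERANCE = 3   # at least 3 replicas for fault tolerance
MPC_QUORUM = 3        # at least 3 parties for verifiable MPC results

def required_nodes():
    """Redundancy factors compound: 3 MPC parties x 3 replicas each."""
    return FAULT_TOLERANCE * MPC_QUORUM  # 9 nodes per job

def select_workers(nodes, seed):
    """Pick the 9-node working set for a job, deterministically from a seed
    so any observer can reproduce the selection."""
    k = required_nodes()
    if len(nodes) < k:
        raise ValueError("fewer than k nodes known")
    return random.Random(seed).sample(nodes, k)

workers = select_workers([f"node{i}" for i in range(20)], seed="job-42")
print(len(workers))  # 9
```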

# Verifiable Computing

Related works in Interactive Proofs

Verifiable Computation with Massively Parallel Interactive Proofs

- GPU parallel processing for arithmetic circuits of polylogarithmic depth
- No privacy
- Publicly Verifiable
- Efficient
- Circuits of polylogarithmic depth

Allspice: A Hybrid Architecture for Interactive Verifiable Computation

- Verify computations expressed as straight-line programs.
- No privacy
- Publicly Verifiable
- Amortized Efficiency
- Arithmetic Circuits

Pepper: Making Argument Systems for Outsourced Computation Practical (Sometimes)

- Arithmetic constraints instead of circuits, very limited class of functions
- No privacy
- Not Publicly Verifiable
- Amortized Efficiency
- Arithmetic Circuits

Ginger: Taking Proof-Based Verified Computation a Few Steps Closer to Practicality

- Improvement on Pepper, larger class of computations
- No privacy
- Not Publicly Verifiable
- Amortized Efficiency
- Arithmetic Circuits

Zaatar: Resolving the Conflict Between Generality and Plausibility in Verified Computation

- PCP using algebraic representations as Quadratic Arithmetic Programs
- No privacy
- Not Publicly Verifiable
- Amortized Efficiency
- Arithmetic Circuits

Pantry: Verifying Computations with State

- Zaatar improvement which allows verification of stateful computations
- No privacy
- Not Publicly Verifiable
- Amortized Efficiency
- Stateful

River: Verifiable Computation with Reduced Informational Costs and Computational Costs

- Quadratic Arithmetic Program based verifiable computing system.
- No privacy
- Publicly Verifiable
- Amortized Efficiency
- Arithmetic Circuits

Buffet: Efficient RAM and control flow in verifiable outsourced computation

- Support for general loops

Related works in Non-Interactive Proofs

Pinocchio: Nearly Practical Verifiable Computation

- Arithmetic circuits, no input-output privacy

Geppetto: Versatile Verifiable Computation

- Public verifiability, no input-output privacy.
- No Privacy
- Publicly Verifiable
- Amortized Efficiency
- General Loops

SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge

- Increased overhead. Based on Quadratic Arithmetic Programs. Public verifiability without input-output privacy.
- No Privacy
- Publicly Verifiable
- Amortized Efficiency
- General Loops

Succinct Non-Interactive Zero Knowledge for a von Neumann Architecture

- Quadratic Arithmetic Program based SNARK with more efficient verification and proof generation. Universal circuit generator.
- No Privacy
- Publicly Verifiable
- Amortized Efficiency
- General Loops

ADSNARK: Nearly Practical and Privacy-Preserving Proofs on Authenticated Data

- Straight-line computations on authenticated data. Publicly verifiable and more efficient privately verifiable proof
- Input Only Privacy
- Publicly Verifiable
- Amortized Efficiency
- Arithmetic Circuits

Block Programs: Improving Efficiency of Verifiable Computation for Circuits with Repeated Substructures

- More efficient way to handle loops in a program

Related works in Fully Homomorphic Encryption

Non-Interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers

- Yao’s garbled circuits with Fully Homomorphic Encryption, privacy preserving
- Input & Output Privacy
- Not Publicly Verifiable
- Amortized Efficiency
- Arithmetic Circuits

Improved Delegation of Computation Using Fully Homomorphic Encryption

- Input & Output Privacy
- Not Publicly Verifiable
- Amortized Efficiency
- Arithmetic Circuits

Related works based on Message Authentication Codes

Verifiable Delegation of Computation on Outsourced Data

- Bilinear maps and closed-form efficiency. Preprocessing stage.
- No Privacy
- Not Publicly Verifiable
- Amortized Efficiency
- Polynomial of Degree 2

Generalized Homomorphic MACs with Efficient Verification

- Multilinear maps; supports arithmetic circuits of bounded depth

Efficiently Verifiable Computation on Encrypted Data

- Multivariate polynomials of degree 2 with input privacy
- Input Privacy
- Not Publicly Verifiable
- Amortized Efficiency
- Polynomial of Degree 2

A number of strategies exist for verifiable computing.

- Interactive Knowledge Proofs
  - Interactive
  - Variable attestation
  - Computation Overhead
  - Example: Buffet
- Non-Interactive Knowledge Proofs
  - Trusted Setup*
  - Fixed attestation
  - Computational Overhead
  - Example: ADSNARK, zkSNARK
- Trusted Execution Environments (TEE)
  - Hardware Requirement
  - Centralization
  - High Security
  - Fast Execution
  - Example: TPM, TXT, Intel SGX, ARM TrustZone
- Multi Party Computation (MPC)
  - Multiple Participants
  - Unequal reward structures
  - Example: Ethereum

Viable Options include

- zk-SNARK wrapped Virtual Machine (Execution overhead, low barrier to entry)
- TEE wrapped Virtual Machine (Execution optimized, high barrier to entry)
- MPC with Probabilistic Payments (Execution inefficient, low barrier to entry)

# Deterministic Verifiable Computing

Given;

- Fixed cost Opcodes
- Verifiable Computing
- Multi node fault tolerance redundancy

We can accomplish;

- Fixed length execution assessment
- Compute cost execution assessment
- Verifiable Computing Results
- Fault Tolerance

# Decoupling Cost

A fee based token economy marketplace will be required to ensure fair value.

Tasks are associated with a bid: the amount of coin offered for performing the task. Bidders can adjust the bid on their pending tasks to expedite their execution. Workers will decide if a bid is sufficient for them to execute upon. Workers will execute work when profitable for them to do so.

A compute node can decide to execute a given job for less than the expected fee payment. A compute provider can provide a fee less than the indicated execution cost.
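The bid/accept decision on each side can be sketched as a simple profitability check; the class names and thresholds below are hypothetical, not part of the proposal:

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    opcode_cost: int   # deterministic cost from static opcode metering
    bid: float         # coins offered by the compute provider

@dataclass
class Worker:
    unit_price: float  # coins this node charges per unit of opcode cost

    def will_accept(self, job: Job) -> bool:
        # A worker executes only when the bid covers its own cost basis,
        # which may be below the "expected" market fee for the job.
        return job.bid >= job.opcode_cost * self.unit_price

job = Job("job-1", opcode_cost=1000, bid=0.9)
cheap, pricey = Worker(unit_price=0.0008), Worker(unit_price=0.0012)
print(cheap.will_accept(job), pricey.will_accept(job))  # True False
```

Raising the bid on a pending job expands the set of workers for whom `will_accept` is true, which is what expedites execution.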

# Job Scheduling Conflicts

Given

- *n* nodes
- *k* as our fault tolerance variable

We want *k/n* to execute a given job with the following criteria:

- *k* is willing to accept the job based on the OPcode cost
- *k* is willing to accept the job based on the execution fee proposed

Risk

- No nodes willing to accept the job (no-profitability attack)
- *< k* nodes are willing to accept the job
- All nodes are willing to accept the job (too-profitable attack)

Job scheduling with a scheduler is a fairly well-established field of study. In a decentralized environment, we lack a scheduler.

# Job Scheduling

To achieve decentralized job scheduling we discuss the following strategies:

- *k/n* selection with first-come leader selection
- Stake based leader selection
- Cryptographic Sortition participant selection

# Job Propagation k/n

Job requests are unique. Each node holds a list of all job IDs they have received.

Each node that receives a request attempts to act as a job scheduler.

Upon receiving a compute request with an empty job order, the compute node *n1* confirms whether it is willing to accept the job. If so, it signs the request, randomly selects *k* nodes, adds each to the job order, and propagates the request to each node.

Each node receiving the request can accept or deny it. If a node accepts, it signs and propagates to the listed job order nodes. If *y* nodes have accepted but *k* has not been reached, *n1* elects *k-y* new nodes and continues this cycle. If *n1* knows fewer than *k* nodes, it sends the job to the *t* nodes it does know and elects one of them as the new leader.
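The acceptance cycle can be sketched as follows; signatures are stubbed out and node willingness is simulated, so every name here is hypothetical:

```python
import random

def propagate(leader, peers, k, willing, rng):
    """Leader n1 recruits k acceptors, electing k - y replacements for
    decliners until k is reached or the candidate pool is exhausted.
    Returns the accepted ("signed") job order."""
    accepted, declined = [], set()
    pool = [p for p in peers if p != leader]
    while len(accepted) < k:
        remaining = [p for p in pool
                     if p not in accepted and p not in declined]
        if not remaining:
            raise RuntimeError("fewer than k willing nodes known")
        # n1 elects up to k - y new candidates this round
        for node in rng.sample(remaining, min(k - len(accepted), len(remaining))):
            if willing(node):
                accepted.append(node)   # node signs and joins the job order
            else:
                declined.add(node)
    return accepted

order = propagate(
    "n1", [f"n{i}" for i in range(1, 30)], k=9,
    willing=lambda n: int(n[1:]) % 3 != 0,  # simulated refusals
    rng=random.Random(7),
)
print(len(order))  # 9
```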

Job Schedulers are allowed to collect the fees.

- Risk of fewer than *k* nodes known
- Denial attacks by *n1*
- Centralized towards *n1*
- Long potential job acceptance time

# Job Propagation Leader Selection

Compute nodes stake tokens for leader election. Each epoch a leader *nl* is elected to be the job scheduler. The elected leader sends out the job request signed with it as leader. Receiving nodes can validate that *nl* is the correct leader for the epoch. If a node is willing to accept the job, it signs the request and sends it back to the leader. The leader can then choose a subset of nodes to execute the given job.
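Epoch leader election can be sketched as a stake-weighted deterministic draw that every node can recompute locally to validate *nl*. The hashing scheme below is illustrative, not a specified protocol:

```python
import hashlib

def elect_leader(stakes, epoch):
    """Stake-weighted leader for an epoch. Deterministic, so any receiving
    node can recompute it and validate the claimed leader."""
    seed = int.from_bytes(hashlib.sha256(str(epoch).encode()).digest(), "big")
    total = sum(stakes.values())
    point = seed % total                 # landing point on the stake line
    for node in sorted(stakes):          # fixed iteration order for consensus
        point -= stakes[node]
        if point < 0:
            return node
    raise AssertionError("unreachable")  # point always lands inside total

stakes = {"n1": 50, "n2": 30, "n3": 20}
leader = elect_leader(stakes, epoch=12)
# Every node derives the same leader for the same epoch:
assert leader == elect_leader(stakes, epoch=12)
print(leader in stakes)  # True
```

A node with 50% of the stake wins roughly 50% of epochs, which is what ties scheduling power to stake at risk.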

Further optimization: Job scheduling based on cryptographic proofs of resources available*

Job Schedulers are allowed to collect the fees.

- Risk of malicious *nl*
- Denial attacks by *nl*
- Fast job acceptance time
- Further scheduling optimization possible

# Job Propagation Cryptographic Sortition

Compute nodes each have an algorithmic lottery chance. The job ID is used as the seed for the Cryptographic Sortition; all willing nodes enter the sortition, and nodes with a winning match are allowed to participate in the job.

The highest hash seed matching the winning ticket is awarded the fees if they complete the job.

- More than *k* participants
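A minimal sortition sketch: each willing node hashes the job ID with its own ID, hashes under a difficulty threshold win participation, and the highest winning hash collects the fee on completion. The plain SHA-256 scheme and threshold here are illustrative (a real deployment would want a VRF so tickets are unforgeable):

```python
import hashlib

def ticket(job_id, node_id):
    """A node's lottery draw, seeded by the job ID."""
    digest = hashlib.sha256(f"{job_id}:{node_id}".encode()).digest()
    return int.from_bytes(digest, "big")

def sortition(job_id, nodes, threshold_bits=254):
    """Nodes whose ticket falls under the threshold may participate.
    threshold_bits=254 keeps roughly 1/4 of nodes (illustrative difficulty)."""
    threshold = 1 << threshold_bits
    winners = [n for n in nodes if ticket(job_id, n) < threshold]
    # Highest winning ticket is awarded the fees if it completes the job.
    fee_winner = max(winners, key=lambda n: ticket(job_id, n), default=None)
    return winners, fee_winner

nodes = [f"node{i}" for i in range(40)]
winners, fee_winner = sortition("job-7", nodes)
print(len(winners), fee_winner)
```

Because the threshold only bounds each node's individual probability, the number of winners varies per job, which is the "more than *k* participants" risk noted above.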

# Deterministic Verifiable Job Propagation Compute Marketplace

Given;

- Fixed cost Opcodes
- Verifiable Computing
- Multi node fault tolerance redundancy
- Job Propagation
- Fee decoupling

We can accomplish;

- Fixed length execution assessment
- Compute cost execution assessment
- Verifiable Computing Results
- Fault Tolerance
- k/n selection for efficient computing
- Cost efficient marketplace

# Conclusion

We have proposed a deterministic, verifiable, market-value-driven, fault-tolerant, asynchronous world compute engine. We allow for multiple verifiable compute strategies as well as multiple job selection strategies. The proposal allows for a compute environment that gravitates towards market-value compute pricing, with low-cost resources available to the ecosystem.