# FOLD&lt;N&gt; Macro for Loops (`for`, `foreach`) in RIDE

*It’s no secret that RIDE doesn’t have any cycle/for/goto primitives. That fact constantly frustrates developers who need to traverse a collection, calculate the sum of an array, filter a list of items, or match an array of signatures against an array of public keys. However, this limitation does not come from an oversight or unwillingness to deliver a proper developer experience: it’s a matter of environment constraints, because the execution engine (not the programming language) needs to stay non-Turing complete.*

But first, a bit of history.

## Turing Completeness

The definition of the term is tied to the mysterious Turing machine, which manipulates symbols on a strip of tape according to a table of rules. But what does that mean for 2019? Simply put, *Turing completeness of a language means the ability to encode any algorithm within the language’s primitives*. Any means *any* — including brute-forcing elliptic-curve signatures, or even an infinite one that will never produce a result, like:

```
while (true) {
    i = 0
}
```

## Blockchain, RIDE

Virtually all languages you can find are Turing complete, including, of course, Solidity. The problem is the environment, namely the blockchain. No one wants a decentralized network to evaluate an infinite algorithm — that would break a lot of things, to say the least.

That’s why Ethereum has Gas — the ability to charge the executor on a per-instruction basis at runtime. This is a solid, plain-simple solution, but when combined with the necessary gas limits, it results in failed transactions due to underprovided gas or the block gas limit, frustrating users who still have to pay for them.

Within Waves, we adopted a different model: RIDE is explicitly non-Turing complete. It has no cycles or loops in the language and no backward-GOTO-like operator in its bytecode. That allows us to calculate computation costs upfront and reject not just the execution, but the deployment of a dApp itself.

## Problem with `foreach`

Inherently, given an array of elements without knowing its size at **compile time**, you can’t estimate complexity.

```
let arr = getString(this, "key").extract().split("|")
arr.foreach(a => a + "A") # imaginary code
```

In this example the compiler hits the last line and has no idea what `arr` is, and therefore can’t estimate the execution cost of the line — and neither can you, by the way.

If you put a list traversal inside a list traversal, things get ugly polynomially faster. Let’s also put an intense computation in a `map`:

```
let arr = getString(this, "key").extract().split("|")
arr.map { a =>          # imaginary code
    arr.map { b =>
        arr.map { c =>
            sigVerify(foo, bar, baz)
        }
    }
}
```

That’s N³ signature verifications, with N being unknown. You might say something like “*Just don’t allow loops in loops!*” or “*Just don’t allow sigVerify in a loop!*”, but if you think carefully, you’ll see how poor a decision that is. Besides the fact that it doesn’t solve the problem and doesn’t allow for proper cost estimation, it makes the language non-context-free, which makes it a very poor design decision.

## Loops

If we can estimate the cost of every expression, we’re good. So let’s add a parameter: the number of iterations:

```
let arr = ...
arr.map(10, { a => a + 1 })
```

What if the array is bigger than 10? Let’s throw an exception then.
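To make the idea concrete, here is a minimal Python sketch of such a capped `map` (the `capped_map` helper is hypothetical, purely for illustration — it is not a RIDE API):

```python
from typing import Callable, List, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def capped_map(limit: int, arr: List[A], f: Callable[[A], B]) -> List[B]:
    # The bound is a constant known up front, so the worst case
    # is at most `limit` applications of `f`.
    if len(arr) > limit:
        raise ValueError("array has %d elements, max %d" % (len(arr), limit))
    return [f(a) for a in arr]

print(capped_map(10, [1, 2, 3], lambda a: a + 1))  # [2, 3, 4]
```

Because the bound is a literal, a cost estimator can price this call before ever seeing the data.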

But if the parameter is of type Integer, its value is also unknown at compile time:

```
let arr = ...
let steps = arr.size()
arr.map(steps, { a => a + 1 })
```

## Macros

Macros for a programming language are *syntactic* sugar on steroids. When a compiler sees a macro, it rewrites the code the macro encloses and then compiles the actual output. Macros are extremely useful when something can’t be expressed within existing language constructs, but they are also extremely hard to debug when things go wrong. *It’s so great that we don’t have a debugger.*

## Fold function family

One higher-order function existing in tons of languages is the family of `fold`, `foldLeft`, and `foldRight`. Scala’s version looks *like* this:

```
def fold[A](col: List[A], z: A, op: (A, A) ⇒ A): A
def foldLeft[A, B](col: List[A], z: B, op: (B, A) ⇒ B): B
def foldRight[A, B](col: List[A], z: B, op: (A, B) ⇒ B): B
```

where `z` is the value we start folding with, which is then subsequently “modified” by the `op` function and the next element of the collection.

These functions are extremely powerful: one can define `sum`, `filter`, `zip`, `exists` — almost anything — with just them. No need for `foreach`.
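To see why, here is a minimal Python sketch, using `functools.reduce` as the stand-in for `fold` (the helper names are illustrative):

```python
from functools import reduce

# Each helper is just a fold with a suitable starting value and step function.

def filter_(xs, pred):
    # start from [], append only the elements that satisfy the predicate
    return reduce(lambda acc, x: acc + [x] if pred(x) else acc, xs, [])

def exists(xs, pred):
    # start from False, OR in the predicate result for each element
    return reduce(lambda acc, x: acc or pred(x), xs, False)

def sum_(xs):
    # start from 0, add each element
    return reduce(lambda acc, x: acc + x, xs, 0)

print(filter_([1, 2, 3, 4, 5], lambda x: x % 2 == 0))  # [2, 4]
print(exists([1, 2, 3], lambda x: x > 2))              # True
print(sum_([1, 2, 3, 4, 5]))                           # 15
```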

## FOLD&lt;N&gt; Macro for RIDE

First, some intuition on how to use it:

```
func sum(a: Int, b: Int) = a + b
let arr = [1, 2, 3, 4, 5]
let total = FOLD<5>(arr, 0, sum) # result: 15
```

This reads: *make **up to** 5 steps; on each step, take the accumulated result and the next element, apply the **sum** function to the pair, and return the accumulated result.*

The `filter` function:

```
func filterStep(accumulated: List[Int], a: Int) =
    if (a % 2 == 0) then a :: accumulated else accumulated

let arr = [1, 2, 3, 4, 5]
let evens = FOLD<5>(arr, nil, filterStep) # result: [2, 4]*
```

\* *You will actually get [4, 2], but this is solvable.*
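The reversed order is easy to reproduce outside RIDE; here is a Python sketch of the same left fold with prepending (names are illustrative):

```python
from functools import reduce

def filter_step(accumulated, a):
    # prepend, like RIDE's `a :: accumulated`
    return [a] + accumulated if a % 2 == 0 else accumulated

evens = reduce(filter_step, [1, 2, 3, 4, 5], [])
print(evens)        # [4, 2] -- prepending while folding left reverses the order
print(evens[::-1])  # [2, 4] -- reversing afterwards is one way to solve it
```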

When compiling this, the compiler *simply* unwraps

```
let result = FOLD<5>(arr, acc0, function)
```

to something like:

```
let result = {
    let size = arr.size()
    if (size == 0) then acc0 else {
        let acc1 = function(acc0, arr[0])
        if (size == 1) then acc1 else {
            let acc2 = function(acc1, arr[1])
            if (size == 2) then acc2 else {
                let acc3 = function(acc2, arr[2])
                if (size == 3) then acc3 else {
                    let acc4 = function(acc3, arr[3])
                    if (size == 4) then acc4 else {
                        let acc5 = function(acc4, arr[4])
                        if (size == 5)
                        then acc5
                        else throw("Too big array, max 5 elements")
                    }
                }
            }
        }
    }
}
```

Again, you will never see this code. Well, that’s *exactly* what you’ll see in the Decompiler, at least at first — but if you decompile stuff, it means you are a developer, and therefore you should be able to understand the trade-offs.
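For intuition, the unrolled expansion behaves like this Python sketch of a bounded fold (the `fold_n` helper is hypothetical, mirroring the macro’s semantics):

```python
def fold_n(max_steps, arr, acc0, f):
    # Mirrors the macro expansion: a fixed chain of at most `max_steps`
    # applications of `f`, and an exception beyond that.
    if len(arr) > max_steps:
        raise Exception("Too big array, max %d elements" % max_steps)
    acc = acc0
    for x in arr:  # runs at most max_steps times
        acc = f(acc, x)
    return acc

print(fold_n(5, [1, 2, 3, 4, 5], 0, lambda a, b: a + b))  # 15
```

The crucial difference from a real loop is that `max_steps` is a compile-time constant in the macro, so both the cost and the script size are fixed before execution.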

## Trade-offs

Pros:

- `sum`, `mult`, `filter`, `contains`, `exists`, `average` — anything that can be expressed with `fold`
- No need to change the Estimator or the Evaluator — everything happens at pre-compile time
- For that reason, no hard forks are needed for the new functionality

Cons:

- You have to know the max size of your collection.
- If you get more elements than expected, you get an exception.
- Computation cost is linear in that max value.
- Script size is linear in that max value.

## Multisig

For an unknown reason, one of the top questions about our multisig example is “*How do I make the requirement for signature positions in the proofs array non-strict, so that one can shovel signatures into the transaction as they come?*” Anyway, here’s how (4-of-5):

```
{-# STDLIB_VERSION 3 #-}
{-# CONTENT_TYPE EXPRESSION #-}
{-# SCRIPT_TYPE ACCOUNT #-}

# array of 5 public keys
let pks = [base58'', base58'', base58'', base58'', base58'']

# is the transaction signed by the given public key or not
func signedBy(pk: ByteVector) = {
    # inner fold step for each signature
    func f(acc: Boolean, sig: ByteVector)
        = acc || sigVerify(tx.bodyBytes, sig, pk)
    FOLD<5>(tx.proofs, false, f)
}

# outer fold step for each public key
func signedFoldStep(acc: Int, pk: ByteVector)
    = acc + (if (signedBy(pk)) then 1 else 0)

# comparing the total number of correct signatures
# to the required number of correct signatures
FOLD<5>(pks, 0, signedFoldStep) == 4
```

A bit cryptic and too terse to understand at first glance, but it works.
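If the RIDE version is hard to follow, the same control flow can be sketched in Python with signature verification stubbed out (all names here are illustrative, not the RIDE API):

```python
def sig_verify(body, sig, pk):
    # Stub standing in for real curve25519 signature verification.
    return sig == "signed-by-" + pk

def signed_by(pk, proofs, body):
    # Inner fold over the proofs: does any of them verify against pk?
    acc = False
    for sig in proofs:
        acc = acc or sig_verify(body, sig, pk)
    return acc

def enough_signatures(pks, proofs, body, required):
    # Outer fold over the public keys, counting how many are covered
    # by at least one proof, in any position.
    total = 0
    for pk in pks:
        total += 1 if signed_by(pk, proofs, body) else 0
    return total == required

pks = ["pk1", "pk2", "pk3", "pk4", "pk5"]
proofs = ["signed-by-pk1", "signed-by-pk2", "signed-by-pk4", "signed-by-pk5"]
print(enough_signatures(pks, proofs, "tx-body", 4))  # True
```

Because the inner fold checks every proof against every key, signature order in `proofs` no longer matters — which is exactly what the question asked for.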

## So what, Turing Complete now?

Almost: the missing piece is `FOLD<INFINITY>`. But in dApps, that should never be needed anyway. Let’s say it’s *Turing-Complete Enough*.

*TLDR: Loops are simulated via the FOLD&lt;N&gt; macro, which requires the maximum number of iterations at compile time and later unwraps into basic primitives.*