LTO.Network: How Reducing Power Increases Utility

Segment 1: Intro:

For my third article, I’ll be taking a request from the company LTO.Network (referred to hereafter as LTO, for short) to analyze and explain their systems, particularly as they compare to other blockchains in potentially relevant spaces. Specifically, LTO is designing enterprise-focused individual blockchains that are each meant to represent a specific process flow between multiple divisions or companies. These enterprise- and task-specific blockchains enable a granular, easily modifiable representation of workflows that turns every workflow into an enforceable, validatable contract. In short, think of LTO’s work as smart contractifying enterprise workflows. Well, not quite — in fact, the difference between smart contracts (SCs) and LTO’s form of computation, called Live Contracts (LCs), will be the primary focus of this paper. But I’m getting ahead of myself.

Before briefly describing the system technically, I should take a bit of time to discuss my biases here. This is an article LTO approached me and asked to write, and I’ve had a number of conversations with their technical team and CTO about the white paper — so, on an intuitive level, I’ll probably be predisposed toward the system the same way I was toward Solana and Enigma. However, as with those two, I will point out all the concerns I have with the system as they become relevant.

Now that I’ve disclosed my biases, we can move on to LTO’s tech, their Live Contracts. For the more technically minded: LTO represents workflows as extended Finite State Machines, with additional pieces to cover multi-party communications, and uses Hamiltonian zero-knowledge proofs (as explained in a prior post of mine) to prove to parties that the correct actions within a workflow have been taken without revealing sensitive data.

For the less technical, here’s a short summary of the above: LTO uses a strictly less expressive system than smart contracts to represent business workflows. Specifically, their chosen system is incapable of representing arbitrary computations, and is closer to Bitcoin’s scripting language than to Ethereum’s in terms of power, albeit a little more powerful than Script. LTO’s systems also include built-in zero-knowledge proof capability, as explained in my prior piece on privacy and security. In this enterprise case, zero knowledge is exactly what workflows need — each enterprise already knows its own portion of the task, and the only privacy required is that the other enterprises not learn that sensitive information.

A sample LTO Live Contract

So, there’s a description of the Live Contract system. Now, we can dive into the main body of this article: given that smart contracts exist, and, as said above, they are more expressive and powerful, why use Live Contracts at all?

Segment 2: Generalized thesis: More options not always better

Now, I expect that the idea that ‘losing options can be beneficial’ will seem almost paradoxical to many of you. As well it should! At first glance, the idea that having more choices and more expressive power can leave us worse off seems patently ridiculous. It is therefore time to explain why your intuitions are (sometimes) dead wrong. We’ll begin with an example of option restriction from outside the CS world.

I imagine many of you are familiar with the game of Chicken — for those of you who aren’t, however, I’ll provide a brief explanation. Basically, the game proceeds as follows: 2 people drive towards each other, and the first one to brake or swerve loses. If, however, the cars collide, then both people lose — and, of course, now have to fix up their cars!

Now, consider playing a game of chicken normally. Whether to swerve or not at each instance is a complicated decision, based on what you know of the other driver, how much you value winning, and so on. Let’s make life a little easier on ourselves. What if, before the game started, you called out to the other driver and just…destroyed your steering wheel? Now, you know the other driver has to swerve — if you drive forward, either he loses or he loses and has to fix his car. Clearly, despite the fact that your only action was restricting your options, you’ve made your position in the game far better! (This is, of course, slightly oversimplifying — in particular, it elides the case where you, midway through destroying your steering wheel, look up to see your counterpart smashing his. Which is, of course, far from ideal. But I digress.)

Now, that’s one example. I can already hear your responses, though — so what if it makes sense in a contrived game like chicken? What does that have to do with real-life situations and the expressive power of code? Quite fair. I’ll now move to a more relevant example — but it will mean going into a bit of computer science. First off, consider a normal computer — you can read in large amounts of data, the system can hold one of many possible states, and you can push and pull lots of data from working memory. This intuition about computers can be formalized as an entity called a Turing machine (TM) — think of a TM as an extremely simple computer with its own internal state that operates on an extremely long line of tape, where the tape serves as input, memory, and output. At each step, the computer reads the tape at its current spot, then, based on its internal programming and current state, makes three choices: 1) how to update its current state, 2) whether to change what’s on the tape below it, and 3) whether to move right one space, left one space, or stay still. Now, this seems incredibly simple, but it is in fact provably equivalent to modern computing in terms of power — anything our machines can compute, so can a Turing machine (albeit given a lot more time).
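
To make the TM picture a bit more concrete, here’s a minimal Python sketch of a single-tape machine stepper. The transition-table format, the toy ‘flip every bit’ machine, and the function names are all my own invention for illustration, not any standard formalism.

```python
# A minimal single-tape Turing machine sketch (illustrative only).
# The transition table maps (state, symbol) -> (new_state, write_symbol, move).

from collections import defaultdict

def run_tm(transitions, tape, start_state, accept_states, max_steps=10_000):
    """Simulate a TM; `transitions` is {(state, symbol): (state, symbol, move)}."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    state, head = start_state, 0
    for _ in range(max_steps):
        if state in accept_states:
            break
        key = (state, cells[head])
        if key not in transitions:
            break  # no rule for this situation: halt
        state, cells[head], move = transitions[key]
        head += {"R": 1, "L": -1, "S": 0}[move]
    return state, "".join(cells[i] for i in sorted(cells))

# Toy machine: flip every bit on the tape, then accept at the first blank.
flip = {
    ("scan", "0"): ("scan", "1", "R"),
    ("scan", "1"): ("scan", "0", "R"),
    ("scan", "_"): ("done", "_", "S"),
}
print(run_tm(flip, "10110", "scan", {"done"}))  # ('done', '01001_')
```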

Now, TMs are well and good — but, sadly, there’s one big problem with them. It’s very difficult (and, in fact, provably impossible) to analyze certain facts about TM programs (and thus modern computer programs). In particular, analyzing nontrivial semantic properties of a program — i.e., figuring out a program’s behavior on general classes of input — is provably impossible; this is Rice’s theorem. To give a simpler example, say you have to solve some task, like going through a long list and picking out the odd numbers. You don’t want to code a script to do it, so you send it out to your friend. Your friend sends you something completely unreadable, but it does seem to be a well-formed program — so if you could analyze it to make sure it actually picked out all the odd numbers, that would be good enough. Unfortunately, you provably can’t for a general task of this sort — while you can check the program’s performance on test cases, that doesn’t show you that the program works perfectly unless your test cases are perfect! This is obviously an untenable situation for tasks where we need to be sure our code will work every time — the rise of code-auditing businesses shows us that much. However, is this how development has to look?
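
To see why passing tests isn’t the same as being correct, here’s a deliberately buggy (and entirely hypothetical) odd-number filter in Python: every supplied test passes, yet the function is wrong on inputs the tests never cover.

```python
# Illustrative only: a "pick the odd numbers" helper with a hidden flaw
# that the supplied test cases never exercise.

def pick_odds(nums):
    # Buggy: the extra `n >= 0` check silently drops negative odd numbers.
    return [n for n in nums if n % 2 == 1 and n >= 0]

# All of these tests pass, yet the function is wrong in general.
assert pick_odds([1, 2, 3, 4]) == [1, 3]
assert pick_odds([10, 12]) == []
assert pick_odds([7]) == [7]

# The latent failure only shows up on an input the tests never tried:
print(pick_odds([-3, -2, 5]))  # [5], but -3 is odd and should have been included
```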

Segment 3: Make illegal states unrepresentable:

At the end of the last section, I left you all with a seeming catch-22: developing on Turing-machine-equivalent devices — which all computers provably are — leaves us with the paradox of untestability: we can’t build test architectures to ensure a program works perfectly unless our test architecture is itself perfect! Now, we have two paths we can take from here. 1) Accept this vulnerability as a necessity, and focus on building good-enough tests to get around it. This path is the only feasible one for generalized development, and it is what we therefore need to content ourselves with most of the time. It isn’t perfect, though — and many recent bugs in the cryptocurrency market can be traced to failures of test architecture in checking for common mistakes (take, e.g., reentrancy bugs, a common failure state of many SCs, including but not limited to the original DAO). However, if we know something about our problem or task’s structure, we can take advantage of that for path 2) Formalize certain invariants (things which are always true) in order to make certain classes of failure states not only difficult, but impossible to represent.

Let me explain the above with a simple example. Imagine we’re building a system meant to handle the buying and selling of widgets that come in packs of four. Now, we could just represent the number of widgets as a raw number, and rely on the customers/employees to only ever feed the buying and selling functions numbers that are multiples of 4. But that’s a great way to immediately run into a catastrophic bug when somebody screws up and we inexplicably have an extra widget in the system that nobody can account for. We could, of course, also fill our software with test cases to ensure we’re only using multiples of 4 — and, for something this simple, that wouldn’t even be that hard. But there’s a simpler way. What if, instead of tracking the number of widgets, we stored the number of packs — and only multiplied by 4 in the last step, right before displaying the widget count or calculating a per-widget price? Then we wouldn’t even have to test that invariant: it would be impossible to store a number of widgets in the system that wasn’t a multiple of 4!
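
Here’s a minimal Python sketch of that pack-based representation. The class and method names are mine, invented purely to illustrate making the invariant unrepresentable rather than tested.

```python
# Illustrative sketch: store inventory as packs, so a non-multiple-of-4
# widget count is literally unrepresentable.

class WidgetInventory:
    WIDGETS_PER_PACK = 4

    def __init__(self, packs: int = 0):
        if packs < 0:
            raise ValueError("pack count cannot be negative")
        self.packs = packs

    def buy_packs(self, packs: int) -> None:
        if packs < 0:
            raise ValueError("cannot buy a negative number of packs")
        self.packs += packs

    def sell_packs(self, packs: int) -> None:
        if packs < 0 or packs > self.packs:
            raise ValueError("invalid number of packs to sell")
        self.packs -= packs

    @property
    def widget_count(self) -> int:
        # The only place widgets-as-a-number ever appears: display and pricing.
        return self.packs * self.WIDGETS_PER_PACK

inventory = WidgetInventory(packs=3)
inventory.buy_packs(2)
inventory.sell_packs(1)
print(inventory.widget_count)  # 16: always a multiple of 4 by construction
```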

That widget example is simple, but nonetheless explains the idea behind invariants: you can profitably restrict your ability to represent information in order to make errors not only avoidable, but unrepresentable. Therefore, even though you might still be developing on an unverifiable machine, you can restrict your ability to use that machine to a subset of verifiable uses. This is the insight behind Live Contracts, which we’ll cover in more detail in the next section.

Segment 4: LCs vs SCs:

Now that we’ve described the concepts behind invariants, and how restricting computational power can lead to more useful and less error-prone systems, we can move on to comparing LCs to the more typical SC. Before getting into LCs, let’s first explore the smart contract side — this turns out to be rather simple: smart contracts, as a general class, are Turing-equivalent. They can do all the computation a normal computer can, but they are just as limited by the lack of any verification beyond testing.

Now, onto Live Contracts. Before describing the system, let’s first get an idea for what invariants we want to be working under. Remember: the problem domain is Enterprise Workflow, with each LC existing to solve a particular task. What is the maximum amount of power we need, and what assumptions/invariants can we rely on enough to reify?

Invariant 1: Constant Number Of Parties: We can first note that any individual enterprise task is going to pass through the same number of divisions and companies each time it is performed (though different amounts of effort might be required at each stage). This means that we can keep all contracts bounded in size — i.e., we can make an LC for a specific task expect inputs from a fixed number of people, in a specific order (or along one of a set number of possible pathways).

Invariant 2: Sequential Steps: Enterprise workflows are generally a matter of tasks taken in a specific order and completed one after the other — first you requisition something from one company or send something to them for signature, and then it gets returned/accepted/denied and you respond, and so on. We can generally assume that either events within a workflow are sequential, or that events which happen in parallel can be packaged into single events within the chain of sequential events.

Invariant 3: Constant number of known possible states: When we create a workflow, we know that the system the workflow is modeling can only be in a certain number of possible states, represented by the nodes within that workflow. If, for example, our workflow for a given contract specifies that a company can either be in contract, waiting for contract reception, or out of contract, we don’t need to worry about the case that the company is both out of contract and waiting for reception (the question of what to do with a state like that in real life is an important one! But it’s a question of workflow design accurately representing reality, not of representing those workflows programmatically).

Those are the main invariants we can rely on — note, crucially, that neither determinism (the absence of randomness) nor information-completeness (not needing outside-workflow information to determine the correct next state) is assumed here. Dropping the first lets us account for things like communication failures within the workflow; dropping the second lets us model things like ‘we’ll build this system contingent on our outside contractors approving it’ without extending a workflow to include the entire approval process.

So, now that we have these invariants, is there a system which handles them? It turns out the answer is yes: the aforementioned extended Finite State Machine model on which LCs are built. FSMs are just powerful enough to allow for computations with the requirements we’ve described, but weak and inexpressive enough to allow for analysis and formal verification.
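
To give a feel for how little machinery this takes, here’s a minimal Python sketch of a workflow modeled as a finite state machine, reflecting the three invariants above (a fixed set of parties, sequential steps, a fixed set of states). The states, actors, and actions are invented for illustration; this is not LTO’s actual Live Contract format.

```python
# Illustrative sketch of a workflow as a finite state machine.
# States, actors, and actions are invented; this is not LTO's actual format.

WORKFLOW = {
    # (current_state, actor, action) -> next_state
    ("draft",       "supplier", "submit_quote"): "quoted",
    ("quoted",      "buyer",    "accept_quote"): "in_contract",
    ("quoted",      "buyer",    "reject_quote"): "draft",
    ("in_contract", "supplier", "deliver"):      "delivered",
    ("delivered",   "buyer",    "confirm"):      "closed",
}

def step(state, actor, action):
    """Apply one workflow event; events with no matching transition are illegal."""
    try:
        return WORKFLOW[(state, actor, action)]
    except KeyError:
        raise ValueError(f"{actor} cannot {action} while workflow is {state!r}")

state = "draft"
for actor, action in [("supplier", "submit_quote"),
                      ("buyer", "accept_quote"),
                      ("supplier", "deliver"),
                      ("buyer", "confirm")]:
    state = step(state, actor, action)
print(state)  # "closed"
```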

So, what can this system actually get us? Let’s explore a couple of simple examples of enterprise workflow that LTO makes easy.

1: Multi-party Contract Signing:

Imagine that you have some contract that you want to transfer among some preset number of parties, with signatures coming from each party, and with the contract potentially changing from signature to signature, with any change requiring re-signing by all parties. In the current world, the only way to do this sort of thing would be with human conveyance of the contracts, or by trying to graft together multiple internal electronic systems. Under LTO, this is an incredibly easy system to model and verify (store a hash of the contract as on-chain data, have a state for every combination of parties having signed, and reset to the initial state when the hashes don’t match). This system as presented does require a new contract for each potential combination of parties, but that is analogous to needing a new contract when a new party gets involved in a deal in the real world.
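
As a rough illustration (with made-up party names and hashes, not LTO’s actual schema), here’s how that signing flow stays within an FSM: because the parties are fixed up front, ‘who has signed which version so far’ is one of a finite number of states, and a hash mismatch simply resets the machine.

```python
# Illustrative sketch: three-party contract signing as a finite state machine.
# Since the parties are fixed, "who has signed so far" ranges over the finite
# set of subsets of {A, B, C}, so the whole flow stays an FSM.

PARTIES = ("A", "B", "C")

def initial_state(contract_hash):
    return {"hash": contract_hash, "signed": frozenset()}

def sign(state, party, contract_hash):
    """Record a signature; any change to the contract hash resets all signatures."""
    if party not in PARTIES:
        raise ValueError(f"unknown party {party!r}")
    if contract_hash != state["hash"]:
        # The contract changed: fall back to the initial state for the new hash.
        return {"hash": contract_hash, "signed": frozenset({party})}
    return {"hash": state["hash"], "signed": state["signed"] | {party}}

def fully_signed(state):
    return state["signed"] == frozenset(PARTIES)

s = initial_state("hash-v1")
s = sign(s, "A", "hash-v1")
s = sign(s, "B", "hash-v2")   # B signs an amended contract: signatures reset
s = sign(s, "A", "hash-v2")
s = sign(s, "C", "hash-v2")
print(fully_signed(s))        # True: A, B, and C have all signed hash-v2
```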

2: Encrypted Data Transfer:

Imagine that you have some piece of encrypted data that you want to pass from party to party under preset rules, in such a way that you can verify the limitations on that data access statically — i.e., without exposing the data to any risk of loss. In the current world, we’d be limited to automated testing/pentesting of the smart contracts or access-control solutions under which the data is stored and transferred. Under LTO, it’s possible to formally analyze and verify the Live Contract controlling access to the data, increasing security and decreasing the risk that the data is ever compromised. I.e., we don’t need test cases to verify the functionality of our data transfer, and we can analyze the contract without needing sample cases/uses.
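
One way to see what ‘formally analyze’ buys you: since a finite state machine is just a finite graph, a question like ‘can the data ever end up with an unauthorized party?’ becomes an exhaustive reachability check rather than a test suite. The sketch below uses invented states and is only meant to show the shape of such an analysis.

```python
# Illustrative sketch: static analysis of a data-transfer workflow.
# Because an FSM is a finite graph, "can we ever reach a bad state?" is a
# simple exhaustive search, with no test inputs required.

TRANSITIONS = {
    "data_with_A":        {"data_with_B", "archived"},
    "data_with_B":        {"data_with_A", "archived"},
    "archived":           set(),
    # A state we never want to reach: data handed to an unauthorized party.
    "data_with_outsider": set(),
}

def reachable(start):
    seen, frontier = set(), [start]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        frontier.extend(TRANSITIONS.get(state, set()))
    return seen

# This check covers every possible execution path at once.
print("data_with_outsider" in reachable("data_with_A"))  # False: provably unreachable
```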

Of course, LTO has more use cases than just these — the two chosen are ones that LCs are particularly powerful for.

Now that we’ve explored the motivations for LCs, have we said all there is to say? No — there are a few more pieces of the LC system to explore, as well as a couple of points to make about when not to use LCs — the lack of power does cost you certain use cases for which a standard smart contract or traditional database might be better suited.

Segment 5: Desideratum and Conclusions:

Now that we’ve covered the bulk of LTO’s Live Contracts system, all that’s left is to cover a few more side points before ending with a succinct conclusion.

Side Point 1: The Oracle Advantage:

As any of you who have been following projects like Chainlink may know, current smart contract systems have a lot of trouble integrating oracles — third-party information sources — into the running of the blockchain. Now, this happens for a number of reasons, but the primary one is that there’s no clear way to integrate oracles into existing blockchain systems that preserves the security of the system and formalizes the roles of those information sources. (This isn’t to say that such a development is impossible, far from it — merely that it’s, as of now, a problem without a standardized solution.) Under LTO, meanwhile, an oracle can be modeled as just another party to a given workflow, so long as its responses can be succinctly characterized (i.e., only discrete oracles can easily be modeled under an FSM). While this doesn’t allow for total oracle integration — in particular, oracles providing analog information don’t weave neatly into LTO’s systems — it does provide a solution for many common use cases.
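
As a quick, entirely invented illustration of that modeling trick: a discrete oracle is just one more actor whose finite set of possible answers becomes ordinary transitions.

```python
# Illustrative sketch: a discrete oracle as just another actor in the workflow.
# Its possible answers are enumerated up front, so they become ordinary
# transitions rather than arbitrary outside computation.

WORKFLOW = {
    ("built",  "contractor_oracle", "approved"): "deployed",
    ("built",  "contractor_oracle", "rejected"): "rework",
    ("rework", "builder",           "resubmit"): "built",
}

def step(state, actor, signal):
    return WORKFLOW.get((state, actor, signal), state)  # unknown events are ignored

state = "built"
state = step(state, "contractor_oracle", "rejected")  # -> "rework"
state = step(state, "builder", "resubmit")            # -> "built"
state = step(state, "contractor_oracle", "approved")  # -> "deployed"
print(state)
```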

Side Point 2: What doesn’t LTO do:

Much of the above article focused on the fact that LTO was intentionally limiting their own expressive power, but I didn’t actually provide any examples of concrete tasks that FSMs cannot perform. So, here are a couple of simple examples that should hopefully provide a good intuitive picture of where FSMs fail.

1: Unbounded Voting:

Imagine you want to take votes from some arbitrary number of people for or against a policy, then return the winner. Doing this with smart contracts is easy — in fact, a more complex version of this is, quite literally, the demo contract in Remix. However, doing this with an FSM, and thus an LC, is provably impossible (I won’t go into the proof here due to lack of space, but the short version is that FSMs are provably equivalent in power to regular expressions, and there’s a simple reduction from Parenthesis Matching — a problem which regexes provably cannot solve — to unbounded voting).

2: Analog End States:

Imagine that you have some workflow where you want to take in the risk assessments of a number of parties and return a normalized average or product of those assessments. With a typical SC on a private blockchain, this is simple — just have each party submit their assessments and compute the desired function. However, on LCs, we’re limited to digital/discretized end states, and therefore can’t return an end state which is an analog normalization of data.

Now, it should be emphasized that these limitations aren’t insurmountable — in particular, both could be addressed by allowing subsections of an FSM to be replaced by an internal SC, in effect sacrificing a portion of contract verifiability for added functionality. And this, in fact, leads me to my conclusion.

Blind Comparison Of Relative Capabilities is Almost Meaningless.

Let me explain what that means. Say you took a latex glove and a winter glove and compared them along several axes: protectiveness, ability to withstand the elements, and durability. The winter glove wins along all three axes — and, nonetheless, nobody would prefer that their surgeon wear winter gloves while operating. So why are we doing blind comparisons between crypto projects? It seems to me that the better course of action is to explore how well each project is optimized around the demands of its market and problem — that is the relevant question.

Postscript: at LTO’s request, I’ve attached links to their public presences below:

Website: https://LTO.network
Technical paper: https://bit.ly/2IUJhdp
Twitter: https://twitter.com/LTOnetwork
Medium: https://medium.com/LTOnetwork
Telegram: https://t.me/LTOinfo


The Crypto Realist
Harvard Undergraduate Blockchain Group

CS major at Harvard, focusing on vulnerabilities of complex systems. Aiming to cut through the hype surrounding crypto projects. CTO of HUBG.