Adventures in the Rho Calculus

The heart of Goldman Sachs’ billion dollar secret

UPDATE: More musings on higher calculus can be found here

I will get to blockchain in a moment, but the candy-striped clickbait here is serious: Wall Street spent billions of dollars chasing what I am about to cover. This is part of a larger series:

There’s a piece of Goldman Sachs that every other bank wants. Billions have been spent in its pursuit, but with the possible exception of J.P. Morgan, no other bank has achieved it.

Some Disruptions are Too Disruptive

Who cares about stuffy NYC banks, right? Except that even the mighty Google recently admitted that banks have all the latest hotness. Silicon Valley "team" culture strongly prefers conformity these days, and once you start babbling about the strange world outside the von Neumann matrix, you recall H.G. Wells' "The Country of the Blind" and decide it might be better to keep this topic away from HackerNoon and stay quiet.

Risk has its rewards

Manhattan did exactly that, although for more practical reasons: this is competitive IP. Besides, horizontal technology is a tough sell even on the best days, and for good reason: plenty of horizontals come and go, and most fail because they operate at the wrong level of abstraction. Fortunately we now understand the internet and the "cloud". But what about persistent memory? It turns a lot of conventional computer science upside down. Even if you draw comparisons with Oracle, it's good to remember that Larry Ellison initially failed miserably at describing relational technology to CFOs and eventually resorted to just telling prospects whatever they wanted to hear… which worked great until the industry finally caught up with the underlying (and largely experimental) codebase (Oracle 5 at the time) and nearly bankrupted the company. Fortunately he recovered, but that story haunts NYC finance execs: how much wealthier would Ellison have been if he had simply kept his SQL system under wraps?

Early Oracle sales presentations

Agile = Survival

I’ve covered the laundry list of radical breakthroughs that NYC ‘codebuilder’ technology brings to the table ad nauseam, but the bottom line is that Goldman practically invented Scaled Extreme Agile:

The appeal is summed up by an ex-Goldman senior technologist who worked with SecDB for years. “When Lehman went bust, it took us about twelve hours to work out Goldman’s credit exposure to Lehman across each of the entities we’d been trading with. Because every trade that GS had ever done with anybody was recorded in SecDB, we had the data by Saturday night. Other banks were still trying to work out their exposure months — yes *months* — after Lehman went under.”

Will your company be able to react like that during a crisis? Probably not, and frankly Wall Street prefers it that way. Banks mainly built these systems as insurance policies, not because they are wild-eyed early adopters enamored with bleeding-edge tech. Agile is more about survival than ROI. Ask anyone who has gone up against Amazon.

Your CEO might be thinking: Why haven’t I heard about this before?

If you’ve been playing poker for half an hour and you still don’t know who the patsy is, you’re the patsy — Warren Buffett

Banks aren’t about to share their deepest secrets. In a debt crisis, banks stand to make fortunes by confiscating borrowed assets for pennies on the dollar, but there is a catch — they must first survive the crisis themselves.

So what specifically did SecDB do?

The Origins of Blockchain

Essentially Goldman built a distributed transaction log of everything their global corporate machine did: an enterprise-wide blockchain of sorts. But writing software for this sort of system is not trivial, and certainly not something that fits with traditional Silicon Valley thinking. It was only when multiple banks started trying to figure out how to co-mingle these transactions for traceability and regulatory purposes that the topic of blockchain-esque designs started in earnest. SecDB was software built for risk management, with the primary risk being the software system itself. This was a hot topic after the 2008 crash, and it is probably not a coincidence that Craig Wright's research (and others'!) was in this same area and that Satoshi Nakamoto emerged around the same time. Given the number of Russian developers working in NYC, it is also not surprising that Eastern Europe is now booming with crypto startups.

Remember the First Rule of Risk Club

SecDB was top secret until 2015, and Sergei Aleynikov is still facing prison time for murky reasons. The problem with building enterprise "trust" systems like this is that any funny business can be inadvertently uncovered. If anything, Singh and Higgins demonstrated that Goldman and JP Morgan ran more honest shops than others, and we see a curious absence of these systems (save Credit Suisse) in supposedly heavily-regulated Europe. But enough of this story has now leaked into the public domain that talking about it won't end in bizarre suicide.

I’m not here to debate ethics, but it seems fitting that both Silicon Valley and Wall Street have unleashed leviathans that are about to wage war on each other. Meanwhile, let’s scurry below and get into some wonky coding.

Hot-Loading Dilemma

The basic problem with hot-loading a program is this: how do you get your fresh code to "run" after it has been loaded? For a program starting from scratch, the industry has agreed upon an entry point, usually called main(). But what about a program that is already running? And what if you load more than one thing? How can code possibly be expected to call code it previously didn't know about, especially if your software is full of rigid function calls (and yes, even functional programming can get gummed up with hard-wired lambda chains)? Hot-loading is important in the blockchain world because you can't "stop" Ethereum to load a new contract, nor can you stop global markets. And there are no source code files either.
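To make the dilemma concrete, here is a minimal Node-flavored sketch (the registry, register() and hotLoad() names are invented for illustration, not anything Goldman or Ethereum actually ships): instead of the host calling fresh code by a name it already knows, the loaded module registers a handler and the host simply reacts to whatever shows up.

// hot-load sketch: a toy, assuming an ES module environment
const registry = new Map();

function register(name, handler) {
  registry.set(name, handler);      // fresh code announces itself by name
  handler();                        // the host "runs" it without a hard-wired call
}

// Somewhere in the already-running host:
async function hotLoad(path) {
  const mod = await import(path);   // e.g. './fresh-module.mjs' (hypothetical file)
  if (typeof mod.init === 'function') {
    mod.init(register);             // the module wires itself in; no main() required
  }
}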

Reactive Systems

A lot of post-von Neumann thinking revolves around event-driven systems and reactive technology. For years, this stuff has been lurking in (1) database triggers and (2) UI/UX programming; hence it makes sense that dynamic languages like JavaScript work particularly well with persistent memory schemes. Unfortunately, NYC spent a lot of money discovering this the hard way: Python is utter hell for creating a database. On the flip side, SQL is utter hell for writing programs. Is there a middle ground?

The WHEN Gate

If you think back to the list of microprocessor instructions you learned in school, it is clear that reactive programming does not fit well with conventional chips: they suffer from what Carl Hewitt describes as "the inability to express time", namely the inability to suspend processing until some precondition is met:

WHEN this DO that

Not IF this but WHEN this… and we have no idea exactly when. Much like event programming, this statement kinda floats around by itself (whereas IF statements are baked into surrounding control flow). We could certainly spin in a loop and keep checking, but that means tying up the processor. Ideally we sit in a low-power state or do something else until 'this' happens, much like a latch. But if there are multiple inputs, there is no particular way to predict when all inputs A, B and C will finally be true:

WHEN (a AND b AND c) DO that

Suppose A becomes true after about 5 minutes, C two minutes later and then B an hour after that. In the AI world, this is analogous to a neuron that sits dormant until it “fires”. In fact, Field-Programmable Gate Array (FPGA) chips work essentially this way — they run logic blocks that continually sense input signals and then eventually decide to fire message signals down the wire (usually to the next logic block). But power is always the big issue with chip design and you don’t want to be spinning constantly checking for conditions. Also, most programming languages were built long before FPGAs. The hardware companies (Intel, HPE, ARM etc.) are fully aware of all this and are madly cooking up reactive chips in similar secretive Goldman fashion despite being well out of their depth.

As an interim solution, the async programming / multi-threading communities have introduced concepts over the years like map/reduce, futures, promises, cyclic barriers, etc., but these all have stormy relationships with von Neumann and get messy fast even with low-level hardware support. Even in the single-threaded Node world, we would like to apply something like Promise.all, except that promises normally wait for code blocks to return, not for data state changes.
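For contrast, here is a rough sketch of a data-state latch in plain JavaScript (whenAll and onArrive are invented names, not a real library API): it fires only once all three inputs have actually been set, in any order, which is closer to the WHEN gate than Promise.all.

function whenAll(keys, fire) {
  const state = {};
  return (key, value) => {
    state[key] = value;                               // record the arrival
    if (keys.every(k => k in state)) fire(state);     // all preconditions met
  };
}

const onArrive = whenAll(['a', 'b', 'c'], s => console.log('DO that', s));
onArrive('a', true);   // nothing yet
onArrive('c', true);   // still nothing
onArrive('b', true);   // fires: all three inputs have finally arrived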

But suppose we also might like something to happen whenever any of the inputs change:

WHEN (a OR b OR c) DO that

That is, fire whenever a, b or c changes… we don't care which. This is analogous to Promise.race or Promise.any (controversies about error handling aside), but in the real world it is quite common when dealing with upstream dependencies, such as remembering to recompile source code whenever edits are saved or refreshing a web page when underlying data changes; or, for Wall Street, something like Excel at galactic scale.
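A quick sketch of the "any" case along the same lines (the change-listener plumbing is invented, not a framework API): wrap each input's change notification in a promise and race them.

const changeListeners = [];

function onChange(name) {                 // resolves when the named input changes
  return new Promise(resolve => {
    changeListeners.push(change => {
      if (change.name === name) resolve(change);
    });
  });
}

Promise.race([onChange('a'), onChange('b'), onChange('c')])
  .then(change => console.log('DO that, because', change.name, 'changed'));

// Simulate an upstream edit:
changeListeners.forEach(fn => fn({ name: 'b', value: 42 }));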

For programming, we need something that looks more like a true database trigger.

Rho Calculus

A lot of the theory behind all this was originally developed by Robin Milner (process calculus and later the π-calculus, starting around 1980) and also by Tony Hoare (communicating sequential processes), but it has recently been gathered up into the rho (reflective) calculus by Greg Meredith, one of the key architects behind the Microsoft BizTalk engine, who also realized that reflection/introspection is critical for evaluating system state (which leads you down the persistent-memory-as-a-database rabbit hole). He's in Seattle working on his own blockchain technology called RChain, but the same concepts apply generally.

Basically the rho calculus (and the related programming language Rholang) is defined as follows:

P,Q,R ::= 0                                       // nil or stopped process
      |   for( ptrn1 <- x1; … ; ptrnN <- xN ).P   // input guarded
      |   x!( @Q )                                // output
      |   *x                                      // dereferenced or unquoted name
      |   P|Q                                     // parallel composition
x,ptrn ::= @P                                     // name or quoted process

Got it? Ha! The reader is free to dive into the weeds here. The main takeaways are the input guards and the output dependencies. Input guards are essentially the same as “when ALL do this” and output dependencies are “when ANY do that”.

As an aside, the above ‘quoting’ and ‘unquoting’ is analogous to lifting/dropping in category theory as previously discussed here.

Input guards: Ethereum “multi-sig”

Input guards are perhaps most well known in Ethereum as "multi-signature" contracts. Essentially the Ethereum virtual machine (VM) supports the notion of escrow: funds are only disbursed when all parties "sign off" that the transaction is okay to forward. Without getting into any particular blockchain details, below is a simple example of what this looks like. In our codebuilder, we define a function that only fires when all of its input parameters are ready:

Input guard example

Above you see a function multisig() that takes three parameters x,y,z. We also advance the processor clock manually so you can see how this all works:

  • The first clock tick (yellow) produces nothing as none of the inputs x,y,z exist yet. Note the guard is essentially referring to variables that aren’t even defined (hence the reflective part of the rho calculus)
  • Then we set x=3 and do another tick (purple). Again nothing happens
  • So we set z=9 and tick. Still nothing happens
  • Now we set y=11 and this time our function wakes up, fires on the tick (yay!) and prints a message. If this were Ethereum, it would hopefully transfer funds at this point. Because the function does not do anything on prior ticks until all preconditions have "arrived" on the data stream (regardless of order), it is called input guarded; a rough imitation in plain code follows below
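For readers who cannot see the screenshot, here is a rough JavaScript imitation of the same walkthrough (the when()/tick() helpers and the bare state object are invented stand-ins, not the codebuilder's actual API):

const state = {};                       // stand-in for persistent memory
const guards = [];

function when(params, fn) {             // register an input-guarded function
  guards.push({ params, fn, fired: false });
}

function tick() {                       // manually advance the "clock"
  for (const g of guards) {
    if (!g.fired && g.params.every(p => p in state)) {
      g.fired = true;
      g.fn(...g.params.map(p => state[p]));
    }
  }
}

when(['x', 'y', 'z'], (x, y, z) => console.log('multisig fired:', x, y, z));

tick();                  // yellow: nothing, since x, y, z do not exist yet
state.x = 3;  tick();    // purple: still nothing
state.z = 9;  tick();    // still nothing
state.y = 11; tick();    // the guard is finally satisfied: fires and prints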

Output Dependencies

In our next example, we want our function to fire when any of the inputs change. In the days of UNIX makefiles and slow compilers, examining source code dependencies to figure out the minimal set of things to rebuild was a common challenge. Most of that is now automated into various IDEs. Nowadays, the same idea is typically found in frameworks like Redux or MobX that need to propagate UI changes.

Below, in the orange box, we attach output dependencies to a function sum():

Output dependencies example

This means we want sum() to automatically execute whenever the input parameters change. The red box shows we have linked the output of sum() to a variable z, which makes it easier to see the new result whenever x or y changes.

Output recomputations

Again, we manually advance the system clock so we can see how this works (it is actually a transaction boundary). Basically, when either x or y changes, our function should fire. Indeed, incrementing x (blue arrow) causes z to get the new sum=9 (blue box), and when we change y (red arrow), z again gets the new sum=10 (red box). The system messages are interleaved to preserve ordering.
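Here is a comparable JavaScript imitation of the sum() example (depend()/set()/tick() are invented names, and the starting values are assumed since the screenshot is not reproduced here): z is recomputed on any tick in which one of its declared inputs changed.

const state = { x: 3, y: 5, z: undefined };   // assumed starting values
const dirty = new Set();
const deps = [];

function depend(inputs, output, fn) {         // attach output dependencies
  deps.push({ inputs, output, fn });
}

function set(key, value) { state[key] = value; dirty.add(key); }

function tick() {                             // the transaction boundary
  for (const d of deps) {
    if (d.inputs.some(i => dirty.has(i))) {   // did any input change this tick?
      state[d.output] = d.fn(...d.inputs.map(i => state[i]));
      console.log(d.output, '=', state[d.output]);
    }
  }
  dirty.clear();
}

depend(['x', 'y'], 'z', (x, y) => x + y);     // z follows sum(x, y)

set('x', 4); tick();    // blue arrow: z = 9
set('y', 6); tick();    // red arrow:  z = 10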

The Wall Street Connection

The above constructs seem rather basic (at least until you try them yourself; be warned, it can cost you billions), but input guards are of particular importance to trading systems: you don't want a trade to execute until all the criteria have been met. Moreover, once the criteria have been met, you want to launch the trade as soon as possible (ideally the operation is synchronous).

For hot-loading, input guards help tell the system when fresh code has been loaded and how to run it. For AI, things get even more interesting because of a fundamental difference between programming and computing.

Similarly, managing output dependencies is critical in risk systems like SecDB, Athena, etc. for minimizing re-computation of complex models. Mind you, Singh and Higgins were building a Python-esque approach to reactive memoization, but tracking dependencies in large enterprise systems is a serious programming challenge generally. For NYC, dependency graphs are critical for impact analysis when tinkering with massive, complex models and running scenarios for stress tests (banks can't rely on bailouts forever).
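A toy dependency graph makes the memoization point concrete (the Graph class and the rate/notional/exposure names are hypothetical, nothing to do with SecDB or Athena internals): changing one input marks only its downstream nodes dirty, so only those get recomputed on the next read.

class Graph {
  constructor() { this.nodes = new Map(); }
  input(name, value) { this.nodes.set(name, { value, deps: [], dirty: false }); }
  derive(name, deps, fn) { this.nodes.set(name, { fn, deps, dirty: true }); }
  set(name, value) {
    this.nodes.get(name).value = value;
    for (const [k, n] of this.nodes)          // impact analysis: mark downstream dirty
      if (n.deps && this.reaches(k, name)) n.dirty = true;
  }
  reaches(from, target) {                     // does 'from' depend on 'target'?
    const n = this.nodes.get(from);
    return n.deps?.some(d => d === target || this.reaches(d, target)) ?? false;
  }
  get(name) {
    const n = this.nodes.get(name);
    if (n.dirty) {                            // recompute only when stale
      n.value = n.fn(...n.deps.map(d => this.get(d)));
      n.dirty = false;
    }
    return n.value;
  }
}

const g = new Graph();
g.input('rate', 0.02);
g.input('notional', 1e6);
g.derive('exposure', ['rate', 'notional'], (r, n) => r * n);
console.log(g.get('exposure'));   // 20000, computed once
g.set('rate', 0.03);              // only 'exposure' is marked dirty
console.log(g.get('exposure'));   // 30000, recomputed on demand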

The chasm between the east and west coasts becomes particularly acute when you consider that NYC has been running massive production enterprises on these ‘persistent memory VMs’ for years, while they might ironically be waved off as experimental toys west of I-95 (indeed, Meredith is taking heat for being one of the first to break with the west coast mindset).

But what a new frontier! The rho calculus enables a new architecture for enterprise integration: Goldman managed to capture all business-wide activity into a single record of truth, such as SecDB (most databases only show current state, and historical logs must be mined and merged together). Note that event-driven systems like CQRS eventually resemble a Turing tape. Hence, blockchain systems are often described as virtual machines and are tied closely to the notion of 'distributed memory'. I think we will increasingly see companies of the future running on their own 'virtual machines'. When I started this series long ago, I said that Silicon Valley fintech only got part of the Goldman story because they couldn't penetrate the Visa platform. The decentralized web is the other part.

Anyway, I hope these examples help. None of this is particularly new to savvy industry analysts, but programmers on the ground are usually not told the whole story. I realize I am “cheating” by using an actual codebuilder to show how all this is supposed to work but I guess that is part of the dilemma: it is almost impossible to teach this stuff while still trapped inside von Neumann thinking.

Merry Christmas and Happy New Year!


Tallinn, Estonia

EDITOR’S NOTE: The Inception tooling (above) is an emerging Estonian enterprise dapp platform not affiliated with RChain and not supported by them in any way. We are aware that http://pyrofex.net was designated as RChain’s ‘official’ solution and they are not entertaining alternatives.

While we are excited to see RChain move forward, we prefer projects that value ecosystems over monolithic offerings (already a bit too common in the Redmond area). We’ve asked Holdings for a formal response so that investors are not confused.

For Inception on IPFS/IPLD, see here.
