I plan to write a series of blog posts about Ballerina, covering the how, the why, and more of its various aspects. This post covers how Ballerina was conceived.
We started the Apache Synapse project in 2005 — literally a few weeks after WSO2 was born. Synapse, and pretty much every ESB and ESB-like technology that exists today, relies on a dataflow model for configuring its behavior. Fundamentally, a message comes in and flows through various bits of logic that are applied to it in turn. When the flow finishes, the final data is usually sent back as the response or forwarded to some other network endpoint.
When we started thinking about how to improve the Synapse-like approach, the first idea was to simply make the language cleaner and make things better incrementally. However, it was clear that unlike when we created Synapse, the world was going to be a lot more parallel and that almost everything would end up depending on various networked services. That is, just like in the old days we would use shared libraries to get access to other capabilities, we now use network services of all kinds.
For me personally, the first inspiration to think differently about how to coordinate multiple parallel interactions came while watching the opening ceremony of the 2012 London Olympics. That sequence had people jumping from the sky, buildings being put up and removed, people appearing from the ground, fires starting and disappearing, and more. How did the director of that show coordinate all of it?
I had the same question in my head whenever I watched a drama. How is it that everyone shows up and does their thing perfectly at the right time? In that case, each actor takes cues from the others, as do the people running lights and set. Each participant works autonomously but always treats cues from others as blocking signals for when to do their part. When they can’t see the cue, they have someone or something relay a message.
The final convincing experience was that, over the last 10+ years, every time we had a complex situation to discuss with customers (often after something had gone wrong!) we’d always end up drawing a sequence diagram to explain who did what when, and who waited for what to happen. The security team, for example, always used sequence diagrams to explain how complex message flows worked in federation-like scenarios. Oh, and we also used the wonderful websequencediagrams.com website to draw sketches, as it had a simple text syntax for expressing a multi-actor sequence diagram which it would then render in various styles.
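To give a flavor of that kind of textual syntax, here is a rough sketch of how a multi-actor flow might be written in a websequencediagrams-style notation (the actor names and messages are invented for illustration; `->` denotes a request and `-->` a response):

```
title Token federation (illustrative)
Client->Gateway: request with token
Gateway->STS: validate token
STS-->Gateway: new token
Gateway->Service: forward request
Service-->Gateway: response
Gateway-->Client: response
```

A few lines of text like this capture who talks to whom, in what order, and who waits for what — exactly the information we kept drawing on whiteboards.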
Thus, we were already using sequence diagrams to describe how things worked!
In August 2016, we had a design meeting in Colombo to discuss how to finally move forward on the NEL project. NEL (New ESB Language) was our internal codename for the project at the time. Prof. Frank Leymann, our co-conspirator from the University of Stuttgart, came to Colombo for it too, and we blocked out most of the week to stop fiddling around with NEL and to make decisions and finally move forward with commitment.
At that meeting, after much debate, we committed to sequence diagrams as the model to go forward with, in both graphical and textual syntaxes. We decided to invent a way to program using sequence diagrams, not just describe how complex programs worked. That of course meant a textual syntax too — who wants to program for real just with pictures! We also committed that the graphical and textual syntaxes would maintain parity and be 100% interchangeable. That is, the graphical view is not a picture but a canonical rendering of the model of the program, just as the text is.
Finally, we also committed (for the first time in WSO2 history) to work on both an editor and the runtime at the same time and to keep them in sync as we progressed.
And so it began.