The story of Cerebral

The JavaScript framework Cerebral is about to grow out of its infancy. The last two years have truly been an amazing ride. Running an open source project with several contributors, creating a tool that is actually used in real-life applications… it has been the best experience of my career. So I wanted to share the story of Cerebral so far and why you might want to consider using it.

TL;DR — Cerebral is a JavaScript framework with a debugger that gives you great insight into how your application works. We often label applications as complex when it is hard to build a mental image of what is going on. Cerebral helps you handle this complexity both in code and with the debugger.

It all started about two years ago. I wanted to create a code teaching tool that was able to record and replay code changes and other interactions in the browser. At the time I was looking into React and Flux, but for some reason did not like (understand) React. Flux made a lot of sense to me though, so I created an implementation called jFlux. The ideas of Flux fixed many of the challenges I had faced earlier building complex applications.

The good things about jFlux were:

  1. The view layer was separated from the state and business logic
  2. Any component in the view layer could access any state of the application without depending on any other parts of the view layer
  3. Any component could trigger any state change in the application

And this is where I had gone wrong so many times before. As explained in the video above, you run into all sorts of problems when application state is intertwined with the view. When you build a small widget it is completely fine to put all the state inside the component, but then that component IS your “application”. When you have tens or even hundreds of components that depend on your application state, moving the state into a central state store allows any component to reach any state.

But there were also some bad things with jFlux:

  1. It was crazy slow compared to Angular/React (it actually used jQuery for components)
  2. Defining actions inside stores made it difficult to follow complex flows of changes
  3. I had also completely underestimated the need for, and complexity of, asynchronous flows

So even though things were a lot better than my previous experiences, I was still not happy. Then I found Baobab.

A single state tree

The Baobab project was exactly what I needed. Instead of defining actions inside multiple stores, I could flip it around. I could have a single tree and do controlled state changes from the outside. I could also run whatever asynchronous stuff I needed with the defined state changes.

Instead of multiple stores that split state changes and the flow of those changes across separate files (pseudo code):

class MyStore extends Store {
  constructor() {
    this.foo = 'bar'

    this.listenTo('changeFoo', this.changeFoo)
  }
  changeFoo(value) {
    this.foo = value
  }
}

I could do everything in one function (pseudo code):

function getSomething() {
  tree.set(['something', 'isLoading'], true)
  ajax.get('/something')
    .then((result) => {
      tree.set(['something', 'list'], result)
      tree.set(['something', 'isLoading'], false)
    })
    .catch((error) => {
      tree.set(['something', 'error'], error.message)
    })
}

I could make my view layer ask for a change and then just do whatever state changes and complex asynchronous flows were needed. Note that this is still the “one way dataflow” of Flux. But instead of creating rather verbose stores/reducers, we just insert the new value at the path… because at the end of the day, that is exactly what a store/reducer does. The state tree then emits change events on the paths the components subscribe to. I wrote an article about this pattern here.

It was a success, but…

Even though I was happier and felt more in control than I ever had building complex web applications, I wanted to improve the developer experience. So when I saw that Elm had its own debugger…

…I got really inspired.

At this time I was also looking into immutability related to React and after watching “the database inside out”…

… it dawned on me how this actually worked. My view layer had to create a request for a state change, not actually do it. Then the application had to keep a history of these requests. When the application is in its initial state and these historic requests are rerun, well… my application should go to its historic state.

As it turned out, I was already requesting state changes:

tree.set(['foo', 'bar'], 'foo')

If I store the method name “set”, its path [‘foo’, ‘bar’] and its value ‘foo’, I had all the information I needed to rerun the state change.
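This idea can be sketched in a few lines of plain JavaScript. Everything below (the `createRecordingTree` and `replay` names, the shape of the log entries) is illustrative, not Cerebral's or Baobab's actual implementation:

```javascript
// Sketch: record every requested state change, then replay the log
// against a fresh tree to reproduce any historic state.
function createRecordingTree(initialState) {
  const state = JSON.parse(JSON.stringify(initialState))
  const log = []

  return {
    set(path, value) {
      // Store method name, path and value: all we need to rerun the change
      log.push({ method: 'set', path, value })
      let target = state
      for (const key of path.slice(0, -1)) target = target[key]
      target[path[path.length - 1]] = value
    },
    get(path) {
      return path.reduce((node, key) => node[key], state)
    },
    getLog() {
      return log
    }
  }
}

// Replaying the log on the initial state rebuilds the historic state
function replay(log, initialState) {
  const tree = createRecordingTree(initialState)
  log.forEach((entry) => tree[entry.method](entry.path, entry.value))
  return tree
}
```

Because every change goes through `set`, the log is a complete description of how the application reached its current state.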

A better debugger

So I implemented a debugger with time travel debugging. What I realised though is that the time travel debugging was not the thing that made the debugger so great, it was the log of state changes. And this is where I realised: We have these great tools for looking into our JavaScript, but it is very low level. What if we had a debugger on the same abstraction level as the application itself?

So I had some work cut out for me. First of all it was not enough to just show the log of state changes. In traditional examples we think of “increment” and “decrement” as “requesting a change”, but in real applications we might say “openNewsFeed”. Opening a news feed is far more complex than incrementing or decrementing a counter. We need to do many things: ensure the user is logged in, maybe request multiple things from the server, and maybe the returned data requires new requests to the server. Any of these requests might also fail, which are scenarios that need to be handled as well. I wanted to show all of this information in the debugger, and to do so I needed a way to express the flow. After some experimenting back and forth I realised that JavaScript actually has a really good way of defining flows:

[
  doThis,
  doThat
]

So doThis and doThat are functions that make specific state changes. The array just indicates the order in which these state changes should run, one after the other. But what about asynchronous requests? How would I describe their conditional success and error outcomes? As it turns out, objects are good for that:

[
  doThis,
  getThat, {
    success: [
      setThat
    ],
    error: [
      setError
    ]
  },
  finallyDoThis
]

So now we have a structure the debugger can understand and visualise. These structures got a name, signals, and Cerebral was born. Combined with a single state tree it became evident how the debugger should work, and after hundreds of hours and many contributions, this is how it looks:

Running the flow of changes

We say that signals execute chains of actions, where the chains are the arrays and the actions are the functions referenced in the arrays.
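A minimal runner for such chains might look something like the sketch below. The convention that an action returns `{ path, payload }` to pick an outcome is an assumption made for illustration; it is not Cerebral's actual internal API:

```javascript
// Sketch: run a chain of actions in order. When an action is followed
// by a paths object, the { path, payload } value the action returns
// selects which sub-chain runs next.
async function runChain(chain, context) {
  for (let i = 0; i < chain.length; i++) {
    const item = chain[i]
    if (typeof item !== 'function') continue // paths objects are consumed below
    const paths = typeof chain[i + 1] === 'object' ? chain[i + 1] : null
    const result = await item(context) // actions may be sync or async
    if (paths && result && paths[result.path]) {
      // Run the chosen sub-chain with the action's output merged in
      await runChain(paths[result.path], { ...context, ...result.payload })
    }
  }
}
```

Because each action is awaited, asynchronous actions like getThat fit into the same flat structure as synchronous ones.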

As we kept experimenting we suddenly realised how powerful these arrays actually are. For example we can define two different chains and compose them together:

const chainA = [doThis, doThat]
const chainB = [doSomething, doSomethingElse]

const composedChain = [
  ...chainA,
  ...chainB
]

Composing chains like this is a very useful feature and you find yourself splitting existing chains into smaller chains to reuse parts of them in other chains.

Factories also proved to be a useful concept:

[
  set(['foo', 'bar'], true)
]

Using factories we could describe state changes directly in the chain, without even creating a custom action for it. This concept became so useful that we decided to create a core set of action factories we call operators:
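As a sketch of the idea (the names here are illustrative, not Cerebral's actual implementation), a factory is simply a function that returns a ready-made action:

```javascript
// Sketch of an action factory: given a path and a value, it returns
// an action that performs that state change when the chain runs it.
function set(path, value) {
  return function setAction(context) {
    context.tree.set(path, value)
  }
}

// Used in a chain, no custom action needs to be written:
const chain = [
  set(['foo', 'bar'], true)
]
```

The factory runs once when the chain is defined; the action it returns runs every time a signal executes the chain.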

[
  copy('input:foo', 'state:app.foo'),
  delay(200),
  set('state:app.bar', true)
]

We could even make chain factory operators. For example:

[
  copy('input:title', 'state:app.title'),
  ...debounce(500, [
    getTitles, {
      success: [
        showResults
      ],
      error: []
    }
  ])
]

We have spent a lot of time with these signals, and they have proven again and again to be a really great way of describing what you want your application to do. You do not even have to be a programmer to understand what these chains describe. Conceptually, a chain is just a decision tree with some labels on it. Describing how our applications update their state is where most of the complexity comes from, in my opinion.

Render performance

When we talk about a view layer there are two things that affect its performance:

  1. Its pure speed of recalculating what changes are needed in the DOM
  2. Its ability to do recalculations at specific points in the component tree

Snabbdom is a super fast virtual DOM implementation, but you cannot tell a single component somewhere in your component tree to do a new recalculation; you always have to recalculate the whole application. React may be slower in pure speed, but you can point to a specific component, which Cerebral takes advantage of. Inferno is fast no matter what, but here too Cerebral takes advantage of the ability to recalculate a specific component in the component tree.

The way this works is that each component defines what state paths it is interested in. When Cerebral notifies the view layer about a state change, it tells it exactly which paths changed. Unlike Redux, where each connected component needs to reiterate its state dependencies and check whether their values changed, Cerebral can just iterate this small set of affected paths and tell specifically which components need to recalculate. This means that Cerebral does not depend on immutability to optimize rendering. With this approach we can also visualise in the debugger how state changes affect your components, as seen in the video introduction above.
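The core of this matching can be sketched as follows. The names and the string-prefix comparison are illustrative simplifications (real matching would compare path segments), not Cerebral's actual implementation:

```javascript
// Sketch: components declare the state paths they depend on; after a
// change we only compare the changed paths against those declarations.
// A component re-renders if a changed path sits on or below one of its
// dependency paths, or on or above it (a parent object was replaced).
function componentsToRender(subscriptions, changedPaths) {
  return Object.keys(subscriptions).filter((component) =>
    subscriptions[component].some((depPath) =>
      changedPaths.some(
        (changed) =>
          changed.startsWith(depPath) || depPath.startsWith(changed)
      )
    )
  )
}
```

No value comparison and no immutability is needed; the changed paths alone identify the components to recalculate.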

The team

A big part of the Cerebral story is also the team. Many people have contributed, but I want to specifically mention the initial core team of Aleksey, Garth and Brian. Aleksey is the super organised, low-level and structured guy. Garth is the thoughtful, considerate and responsible guy. Brian is the very outgoing, funny and super optimistic guy. I am the impatient, always-with-a-plan and always-executing-something guy. That does not mean these are our only personalities; it is more about how we balance each other out, I think. And the funny thing is that this is completely coincidental. There were no interviews, no tests and no references to previous work… only enthusiasm for the project itself.

And this is what makes open source such a fantastic process. I still think a lot about how a guy from Russia, another in Vienna, one in Florida and me from Norway can find each other online, get together around a project and squeeze in an insane number of hours of private time. Why do we do it? I cannot speak on behalf of the other guys, or all the other contributors who have made Cerebral such a great experience, but I think they can all relate to this: making a difference for others is a fantastic feeling. I would not be doing this if it was not helpful to others. Of course it is not the perfect solution, no solution is… but it is a real tool, solving real challenges of application development, and I think it is worth a look :-)

Thanks for reading and if you want to know more, take a look at our launch site.