Programming Servo: implementing BroadcastChannel.

Gregory Terzian · Programming Servo · Feb 25, 2020

Following up on the implementation of MessagePort in Servo, let’s take a look at how BroadcastChannel was implemented.

What is BroadcastChannel? I can’t do better than the MDN docs:

The BroadcastChannel interface represents a named channel that any browsing context of a given origin can subscribe to. It allows communication between different documents (in different windows, tabs, frames or iframes) of the same origin. Messages are broadcasted via a message event fired at all BroadcastChannel objects listening to the channel (source: https://developer.mozilla.org/en-US/docs/Web/API/BroadcastChannel).

Note that these channels cannot be transferred between windows, so this implementation was actually a bit easier than MessagePort’s. Other than that, it pretty much re-used the infra and patterns introduced by the implementation of the latter.

As a matter of fact, while implementing MessagePort took me about three months, this one took me about three days.

How come? The power of compounding technical assets, as opposed to technical debt.

Alongside private equity, software engineering seems to be one of those rare economic activities where people can confuse racking up (technical) debt with productivity. Sooner or later, the (capital, or software) structure collapses under its own weight, but by then it’s usually up to someone else to clean up the mess.

So, when implementing MessagePort, as well as during a related PR restructuring how Blobs are serialized, I had two aims:

  1. Make the serialization, and transfer, of DOM objects easy.
  2. Make the multi-threading, and multi-processing, involved in cross-window message-passing, easy.

Point 1 was achieved by using SpiderMonkey’s APIs only where they are needed, at the point of reading/writing the DOM values, and not using them where simply leveraging Rust’s Serde ecosystem is more appropriate. Surely making something “serializable” shouldn’t be much harder than adding #[derive(Deserialize, Serialize)] to a struct?

(The answer: it can be much harder if you use SpiderMonkey for the entire operation.)
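For a sense of what that buys you, here is a minimal standalone sketch, assuming the serde and serde_json crates; BlobData is a made-up struct for illustration, not Servo’s actual Blob representation:

// A minimal Serde round-trip: derive the traits, then any Serde
// backend can serialize the struct. JSON is just the easiest to show.
use serde::{Deserialize, Serialize};

#[derive(Debug, Deserialize, Serialize)]
struct BlobData {
    type_string: String,
    bytes: Vec<u8>,
}

fn main() {
    let blob = BlobData {
        type_string: "text/plain".into(),
        bytes: b"hello".to_vec(),
    };
    let wire = serde_json::to_string(&blob).unwrap();
    let back: BlobData = serde_json::from_str(&wire).unwrap();
    println!("{:?}", back);
}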

Point 2 was achieved by not using shared state, relying instead on a third component, the constellation, to act as a router of messages between sending and receiving windows, and as a giant lock around the data needed to make those routing decisions.

In other words, message-passing by the book.

And this initial work provided a set of proverbial technical assets, which made further work easier. Granted, it would have been an even better test if someone else had done the follow-up and found it easy (which is why I waited more than three months after highlighting the issue in the previous article before doing it myself). Still, I was satisfied to experience the follow-up as significantly facilitated by the work preceding it…

Now let’s take a look at it.

Constructing channels

Let’s look at things from the beginning:

var channel = new BroadcastChannel(channelName);

How does this look from a Rust perspective?

https://github.com/servo/servo/pull/25796/files#diff-e858d1a364e73c158d63885e1d8f2b44

So this essentially creates the corresponding DOM object, roots it, and returns it to the JavaScript.
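In spirit, the flow is something like the following standalone sketch, where Rc stands in for Servo’s DomRoot rooting and the helper names are illustrative:

// Sketch of the constructor flow: allocate the object, root it,
// register it with the global, and hand it back to script.
use std::rc::Rc;

struct BroadcastChannel {
    name: String,
}

fn construct(name: &str) -> Rc<BroadcastChannel> {
    // Rc stands in for Servo's DomRoot rooting here.
    let channel = Rc::new(BroadcastChannel { name: name.to_owned() });
    // In Servo, this is where global.track_broadcast_channel(&*channel) runs.
    track_broadcast_channel(&channel);
    channel
}

fn track_broadcast_channel(channel: &Rc<BroadcastChannel>) {
    // The global records the channel by name; detailed in the next gist.
    println!("tracking channel {:?}", channel.name);
}

fn main() {
    let _channel = construct("test-channel");
}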

Let’s go a bit further into the call global.track_broadcast_channel(&*channel), where some of the “magic sauce” happens.

https://github.com/servo/servo/pull/25796/files#diff-59d233642d0ce6d687484bdd009e1017

Now that’s a pretty big gist, so let’s go over it step by step:

  1. There is an enum called BroadcastChannelState, owned by this GlobalScope.
  2. When we start “tracking a new broadcast-channel”, we mutably borrow this enum via the refcell holding it.
  3. If it’s in the original UnManaged state, this means we need to set up some infra to enable the tracking of broadcast channels.
  4. We do this by:
    - setting up a route on the IPC router (which I’ve discussed in this previous article),
    - and by sending a ScriptMsg::NewBroadcastChannelRouter message to the constellation.
  5. We then go to the default “let’s track this channel” branch (a standalone sketch of the whole state machine follows this list), where we:
    - potentially create the entry corresponding to the name of the channel (yes, the JavaScript could for some reason decide to have multiple channels for the same name in a given global), which requires sending a ScriptMsg::NewBroadcastChannelNameInRouter message to the constellation,
    - and push the current channel to the back of the queue for this entry.
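Here is a standalone model of that state machine, with the constellation messages reduced to comments and channel ids reduced to plain strings; the real code naturally looks different:

// Toy model of the UnManaged -> Managed transition described above.
use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};

enum BroadcastChannelState {
    // No broadcast infra set up yet for this global.
    UnManaged,
    // Route and constellation know about us; channels tracked per name.
    Managed {
        channels: HashMap<String, VecDeque<String>>,
    },
}

struct GlobalScope {
    broadcast_state: RefCell<BroadcastChannelState>,
}

impl GlobalScope {
    fn track_broadcast_channel(&self, name: &str, channel_id: String) {
        let mut state = self.broadcast_state.borrow_mut();

        // First channel ever in this global? Set up the infra once.
        if let BroadcastChannelState::UnManaged = *state {
            // In Servo: set up the IPC route, and send
            // ScriptMsg::NewBroadcastChannelRouter to the constellation.
            *state = BroadcastChannelState::Managed {
                channels: HashMap::new(),
            };
        }

        if let BroadcastChannelState::Managed { ref mut channels } = *state {
            // Potentially create the entry for this name; in Servo this
            // also sends ScriptMsg::NewBroadcastChannelNameInRouter.
            let entry = channels.entry(name.to_owned()).or_insert_with(VecDeque::new);
            // Push the current channel to the back of the queue.
            entry.push_back(channel_id);
        }
    }
}

fn main() {
    let global = GlobalScope {
        broadcast_state: RefCell::new(BroadcastChannelState::UnManaged),
    };
    global.track_broadcast_channel("test-channel", "channel-1".into());
    global.track_broadcast_channel("test-channel", "channel-2".into());
}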

So now you’re wondering: what happens in the constellation when those messages are handled?

Let’s first look at the handling of a ScriptMsg::NewBroadcastChannelRouter:

https://github.com/servo/servo/pull/25796/files#diff-55c92a6a5ba7654ce45fe6fc6c63740f

That probably speaks for itself.
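The gist boils down to bookkeeping: the constellation records, for each router, a sender it can later use to reach that event-loop. A toy model, with std mpsc standing in for IPC and all type names simplified:

// Toy model of the constellation registering a broadcast router.
use std::collections::HashMap;
use std::sync::mpsc::Sender;

type RouterId = u32;

struct Constellation {
    // Router id -> the sender used to reach that global's IPC route.
    broadcast_routers: HashMap<RouterId, Sender<Vec<u8>>>,
}

impl Constellation {
    fn handle_new_broadcast_channel_router(
        &mut self,
        router_id: RouterId,
        sender: Sender<Vec<u8>>,
    ) {
        let previous = self.broadcast_routers.insert(router_id, sender);
        debug_assert!(previous.is_none(), "router should register only once");
    }
}

fn main() {
    let (sender, _receiver) = std::sync::mpsc::channel();
    let mut constellation = Constellation {
        broadcast_routers: HashMap::new(),
    };
    constellation.handle_new_broadcast_channel_router(1, sender);
}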

What about the handling of ScriptMsg::NewBroadcastChannelNameInRouter? Also straightforward.
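In essence, the constellation has to remember, per origin and per channel name, which routers are subscribed. A standalone sketch of that bookkeeping, with made-up names and the origin simplified to a string:

// Toy model of registering a channel name under a router.
use std::collections::{HashMap, HashSet};

type RouterId = u32;

#[derive(Default)]
struct BroadcastRoutes {
    // Origin -> channel name -> routers interested in that name.
    channels: HashMap<String, HashMap<String, HashSet<RouterId>>>,
}

impl BroadcastRoutes {
    fn new_broadcast_channel_name_in_router(
        &mut self,
        origin: String,
        name: String,
        router_id: RouterId,
    ) {
        self.channels
            .entry(origin)
            .or_default()
            .entry(name)
            .or_default()
            .insert(router_id);
    }
}

fn main() {
    let mut routes = BroadcastRoutes::default();
    routes.new_broadcast_channel_name_in_router(
        "https://example.com".into(),
        "test-channel".into(),
        1,
    );
}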

So, essentially, the global-scope sets up some infra to locally manage broadcast channels, if this hasn’t been done already, then messages the constellation to let it know what is going on, and then stores a Dom<BroadcastChannel> for later use (wondering what Dom is? Take a look over here).

Since the global potentially sends two messages to the constellation, it’s worth noting that they will indeed be received and handled in that order (sequential sends from the same thread are ordered).

Broadcasting messages

So, once we’ve returned this DomRoot<BroadcastChannel> to the JavaScript, it can be used to start broadcasting, which will look something like:

var channel = new BroadcastChannel(channelName);
channel.postMessage(msg);

Again, let’s start by looking at how this calls into the Rust:

The call to structuredclone::write(cx, message, None) is how msg, the JavaScript value, is serialized. Note that this can be all sorts of objects, for example a Blob. It’s worth an article in itself (and I should probably write one, in light of the grandiose claims I made at the start of this one about “having made this easy”…).

Again, we can see a call into the global, at global.schedule_broadcast(msg, &self.id).

Let’s take a closer look at it:

The “local broadcast” is something we’ll skip here, because it’s essentially what happens when “the other globals” receive and broadcast the message. Let’s just say that since there can be other channels with the same name in the same global, and those should also see the broadcast, we first broadcast the message locally when sending one. Other globals will later do essentially the same, but in response to receiving a message from the constellation…
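A toy model of that two-step flow, with a plain mpsc sender standing in for the IPC sender to the constellation and all names illustrative:

// Toy model: broadcast locally first, then tell the constellation so
// that other globals get the message too.
use std::sync::mpsc::Sender;

struct BroadcastMsg {
    channel_name: String,
    data: Vec<u8>,
}

struct GlobalScope {
    // Stand-in for the IPC sender to the constellation.
    constellation_sender: Sender<BroadcastMsg>,
}

impl GlobalScope {
    fn schedule_broadcast(&self, msg: BroadcastMsg, source_channel_id: u32) {
        // 1. Local broadcast: other channels with the same name in this
        //    global (minus the source channel) see the message right away.
        self.broadcast_message_event(&msg, Some(source_channel_id));

        // 2. Remote broadcast: hand the message to the constellation,
        //    which routes it to every other subscribed global.
        self.constellation_sender
            .send(msg)
            .expect("constellation should be alive");
    }

    fn broadcast_message_event(&self, msg: &BroadcastMsg, _skip: Option<u32>) {
        // Detailed further below; fires events on local channels.
        println!("local broadcast of {} bytes on {:?}", msg.data.len(), msg.channel_name);
    }
}

fn main() {
    let (sender, _receiver) = std::sync::mpsc::channel();
    let global = GlobalScope { constellation_sender: sender };
    global.schedule_broadcast(
        BroadcastMsg { channel_name: "test-channel".into(), data: Vec::new() },
        1,
    );
}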

So, the next step is again sending a message, ScriptMsg::ScheduleBroadcast, to the constellation. Let’s have a look at how it is handled:

Again, I think it speaks for itself. Note that we don’t broadcast the message to the global it came from, since, as we saw earlier, that global does a local broadcast itself.
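Modeled in standalone form, the routing decision looks something like this (simplified types; the filtering on the source router is the part to note):

// Toy model of the constellation routing a broadcast: forward it to
// every router subscribed to the channel name, except the source.
use std::collections::{HashMap, HashSet};
use std::sync::mpsc::{channel, Sender};

type RouterId = u32;

#[derive(Clone)]
struct BroadcastMsg {
    channel_name: String,
    data: Vec<u8>,
}

struct Constellation {
    routers: HashMap<RouterId, Sender<BroadcastMsg>>,
    subscribers: HashMap<String, HashSet<RouterId>>,
}

impl Constellation {
    fn handle_schedule_broadcast(&self, source: RouterId, msg: BroadcastMsg) {
        if let Some(interested) = self.subscribers.get(&msg.channel_name) {
            // Skip the router the message came from: that global has
            // already done its own local broadcast.
            for router_id in interested.iter().filter(|id| **id != source) {
                if let Some(sender) = self.routers.get(router_id) {
                    let _ = sender.send(msg.clone());
                }
            }
        }
    }
}

fn main() {
    let (sender, receiver) = channel();
    let mut routers = HashMap::new();
    routers.insert(2, sender);
    let mut subscribers = HashMap::new();
    subscribers.insert("test-channel".to_string(), HashSet::from([1, 2]));
    let constellation = Constellation { routers, subscribers };

    constellation.handle_schedule_broadcast(
        1,
        BroadcastMsg { channel_name: "test-channel".into(), data: b"hi".to_vec() },
    );
    // Router 2 received the broadcast; router 1 (the source) was skipped.
    let received = receiver.try_recv().expect("router 2 should have received it");
    assert_eq!(received.data, b"hi");
}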

So how do “the other globals” handle this broadcast? This is where the IPC route set up earlier comes into play. Let’s take a look at what this route looks like:

I’ve already covered the use of Trusted in the article on MessagePort, and I’ll do it again here (one can’t get enough of the good stuff).

So this BroadcastListener executes on the IPC router thread, which is not the thread where the GlobalScope executes (that would be the event-loop of the window this global belongs to). Also, GlobalScope is not itself thread-safe.

So how can we be using this Trusted<GlobalScope> from the IPC router thread? Well, are we really using it? Note that we’re not actually using the global on that thread; instead, we queue a task from that thread, and the global is used from within that task.

So that’s what’s happening: the Trusted is essentially a pointer that is Send, wrapping something that isn’t itself Send. The Trusted doesn’t allow you to use the inner value unless you are on the thread it came from! And now you’ve guessed it: the task we queue from the IPC thread will execute on the event-loop where the global normally runs…
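Here is a toy model of that pattern, with a plain id standing in for Trusted<GlobalScope> and an mpsc channel standing in for the task queue:

// A Send-able handle to a thread-affine object: off-thread you may
// only clone the handle and queue a task; the task runs back on the
// owner thread, where the real object is available again.
use std::sync::mpsc::channel;
use std::thread;

// Stand-in for Trusted<GlobalScope>: just an id that is Send.
#[derive(Clone, Copy)]
struct TrustedGlobal {
    id: u32,
}

struct GlobalScope {
    id: u32,
}

// A task queued back to the event-loop that owns the global.
type Task = Box<dyn FnOnce(&GlobalScope) + Send>;

fn main() {
    let (task_sender, task_receiver) = channel::<Task>();
    let trusted = TrustedGlobal { id: 7 };

    // The "IPC router thread": it never touches the GlobalScope itself;
    // it only queues a task, moving the Send-able handle into it.
    let router_thread = thread::spawn(move || {
        let task: Task = Box::new(move |global| {
            // This closure runs back on the owner thread, where the
            // handle can be resolved to the real global again.
            assert_eq!(global.id, trusted.id);
            println!("broadcast handled on the event-loop");
        });
        task_sender.send(task).unwrap();
    });
    router_thread.join().unwrap();

    // The "event-loop": owns the global and drains its task queue.
    let global = GlobalScope { id: 7 };
    for task in task_receiver.try_iter() {
        task(&global);
    }
}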

As to why this IPC router thread is necessary: that’s not something worth covering again here; see the previous article for that.

So one thing we can see is that the queued task calls into broadcast_message_event, the same method the global called into when doing its own “local broadcast”; however, this time it’s called for each “other global” that should see this broadcast.

So now it’s time to look more into the details of this call:

Again, a pretty big gist.

The important point is the following: for each local BroadcastChannel object that subscribed to this particular “channel” (represented by the name of the channel), we queue another task to do two things (a sketch follows the list):

  1. De-serialize the message into the corresponding DOM value, via a call to structuredclone::read.
  2. Fire the MessageEvent on the BroadcastChannel, using the deserialized value, which actually triggers the execution of the JS event-handler set on it.
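A standalone model of that loop, with a plain callback standing in for the DOM event machinery and UTF-8 text standing in for structured clone data:

// Toy model of broadcast_message_event: for every channel subscribed
// under this name (minus, on the sending global, the source channel),
// deserialize the data and fire the handler.
use std::collections::HashMap;

struct BroadcastChannel {
    id: u32,
    onmessage: Box<dyn Fn(&str)>,
}

struct GlobalScope {
    channels: HashMap<String, Vec<BroadcastChannel>>,
}

impl GlobalScope {
    fn broadcast_message_event(&self, name: &str, data: &[u8], skip: Option<u32>) {
        if let Some(subscribers) = self.channels.get(name) {
            for chan in subscribers.iter().filter(|c| Some(c.id) != skip) {
                // In Servo each of these steps runs in a queued task:
                // 1. deserialize (structuredclone::read in the real code);
                let message = String::from_utf8_lossy(data);
                // 2. fire the message event on the channel, which runs
                //    the JS handler. A plain callback stands in here.
                (chan.onmessage)(&message);
            }
        }
    }
}

fn main() {
    let mut channels = HashMap::new();
    channels.insert(
        "test-channel".to_string(),
        vec![BroadcastChannel {
            id: 2,
            onmessage: Box::new(|msg| println!("message event received: {}", msg)),
        }],
    );
    let global = GlobalScope { channels };
    global.broadcast_message_event("test-channel", b"hello", Some(1));
}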

And that’s it, folks; the executing JavaScript will look something like:

channel.onmessage = function(ev) { console.log('message event received!'); };
