The Web being full of “how to …” types of articles, I’ll start this one from the presumption that what works for me might not work for you, hence the title “How I am…”.
This is an article about how I am currently in the process of being enlightened with regard to the craft of designing distributed systems, having spent the last few years focused mostly on concurrency (on a single machine).
But don’t worry, I’ll slip into the presumptive style soon enough.
Requirement number one: spend about 4 years working on a large and complicated concurrent system.
This is not meant as…
“Concurrency is hard” is a common refrain in our industry, and to that could be added “Stopping concurrency is harder”, which could further be supplemented with “Stopping other people’s code running concurrently is even harder”, or something like that.
Servo, being a web engine, is all about running “other people’s code”. And while that code perhaps isn’t “concurrent” in and of itself, it is run by the engine concurrently with other parts of it (such as “different other people’s code”, or the engine’s own code).
Having recently discovered something called “a design application” (it’s really amazing, a whole new world opens up to you), the time seems appropriate to provide a high-level overview of Servo, “Ze system”.
Here goes nothing:
Ok, that’s it, hope it’s clear, and just message me if you have any questions. Just kidding!
So first of all, this is a massive simplification, and yet it’s already complicated as it is. Some omissions:
However, I do think the above gives a pretty good sense of the structure, namely that:
Whenever I have a chance, I extol the virtues of message-passing and event-loops in structuring concurrent workflows in Rust. However, in my wide-eyed enthusiasm, I will make it sound almost as if it cannot go wrong.
So today, let’s go into some details about the archetype of buggy message-passing code.
I think this bug applies to pretty much all message-passing code (the exception might be using synchronous bounded channels).
Even though I think native threads and Crossbeam channels are easier to work with than their equivalents from the async/futures ecosystem (“what do you mean the impl Future trait bound is not met? It’s the channel…
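One common way message-passing goes wrong, and one that synchronous bounded channels would indeed avoid, is unbounded queue growth: an unbounded channel’s send never blocks, so nothing ever slows down a producer that outpaces its consumer. This is a minimal sketch of my own (the function name and numbers are invented for illustration, not taken from any real codebase):

```rust
use std::sync::mpsc::channel;
use std::thread;

// Illustrative only: with an unbounded channel, `send` never blocks,
// so a producer that outpaces its consumer makes the in-channel
// backlog (and memory use) grow without bound. A synchronous bounded
// channel would instead block the producer once its buffer is full.
fn produce_and_drain() -> usize {
    let (sender, receiver) = channel::<u64>();

    let producer = thread::spawn(move || {
        for n in 0..10_000u64 {
            // Never blocks: nothing pushes back on this loop.
            sender.send(n).expect("receiver hung up");
        }
        // The sender is dropped here, closing the channel.
    });

    // By the time this slower consumer drains the channel, most of the
    // 10_000 messages may already have been buffered in memory.
    let received = receiver.iter().count();

    producer.join().unwrap();
    received
}

fn main() {
    assert_eq!(produce_and_drain(), 10_000);
    println!("drained 10000 queued messages");
}
```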
Large swathes of the web platform are built on streaming data: that is, data that is created, processed, and consumed in an incremental fashion, without ever reading all of it into memory. The Streams Standard provides a common set of APIs for creating and interfacing with such streaming data, embodied in readable streams, writable streams, and transform streams.
I recommend taking a look at it since “streams” are a popular concept in Rust-land these days, and I think you’ll find that the streams defined in that…
Within the last couple of days of our global solitary retreat, I stumbled by chance upon the Single Writer Principle™ concept, first via an article on Kafka, which mentioned it in passing, and then via this excellent article zooming in on it, and it was one of those “I’ve been looking for you for at least the past three weeks, where have you been hiding?” kinds of moments.
So, I’m not really a low-level concurrency expert; my focus is rather on high-level, business-logic types of designs. Yet I’m amazed at how often, when you do dig into the lower-level aspects of…
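The core of the single-writer idea can be sketched in a few lines of Rust: exactly one thread owns and mutates the state, and every other thread sends it messages instead of sharing that state. A minimal illustration, with all names (“Command”, “run_writer”) invented for the example:

```rust
use std::sync::mpsc::channel;
use std::thread;

// Sketch of the Single Writer Principle: one thread is the sole
// writer of `total`; producers communicate with it via messages,
// so there is no shared mutable state and no lock.
enum Command {
    Add(u64),
    Stop,
}

fn run_writer() -> u64 {
    let (tx, rx) = channel::<Command>();

    // The single writer: the only place `total` is ever mutated.
    let writer = thread::spawn(move || {
        let mut total = 0u64;
        while let Ok(cmd) = rx.recv() {
            match cmd {
                Command::Add(n) => total += n,
                Command::Stop => break,
            }
        }
        total
    });

    // Many producers, zero contention on the state itself.
    let mut producers = Vec::new();
    for _ in 0..4 {
        let tx = tx.clone();
        producers.push(thread::spawn(move || {
            for n in 1..=10u64 {
                tx.send(Command::Add(n)).unwrap();
            }
        }));
    }
    for p in producers {
        p.join().unwrap();
    }
    tx.send(Command::Stop).unwrap();
    writer.join().unwrap()
}

fn main() {
    // 4 producers × sum(1..=10) = 4 × 55 = 220.
    assert_eq!(run_writer(), 220);
    println!("single-writer total: 220");
}
```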
As a follow-up on a previous article that explored some basic concurrent workflows in Rust, let’s now explore a slightly more advanced pattern: a concurrent pipeline where work is streamed from one stage to the next, and a way to signal back-pressure in that context. We’ll also look at the difference between “push” and “pull” sources of streaming data.
Code example at: https://github.com/gterzian/streaming_concurrent_workflow
Let’s start with the initial code. Unlike in the previous article, the code example is pretty big and complicated right out of the gate:
So what’s going on here?
We have three different “components”, each representing a…
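The linked repository has the real code; as a heavily reduced sketch of the general shape of such a pipeline, here is a three-stage version using only std’s bounded sync_channel, whose blocking send is what provides the back-pressure (stage names and buffer sizes are my own, chosen for illustration):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Reduced sketch of a streaming concurrent pipeline. Each stage runs
// on its own thread; bounded channels connect them, so a full buffer
// blocks the upstream `send` and back-pressure propagates naturally.
fn run_pipeline() -> u64 {
    let (to_square, squares_in) = sync_channel::<u64>(2);
    let (to_sum, sums_in) = sync_channel::<u64>(2);

    // Stage 1: the source, produces numbers.
    let source = thread::spawn(move || {
        for n in 1..=100u64 {
            // Blocks when the buffer of 2 is full: back-pressure.
            to_square.send(n).unwrap();
        }
        // Dropping the sender closes the channel, signalling "done".
    });

    // Stage 2: transforms each item and streams it onward.
    let square = thread::spawn(move || {
        for n in squares_in {
            to_sum.send(n * n).unwrap();
        }
    });

    // Stage 3: the sink, folds the stream into a single result.
    let sum = thread::spawn(move || sums_in.iter().sum::<u64>());

    source.join().unwrap();
    square.join().unwrap();
    sum.join().unwrap()
}

fn main() {
    // Sum of squares 1..=100 = 100·101·201/6 = 338350.
    assert_eq!(run_pipeline(), 338_350);
    println!("pipeline result: 338350");
}
```

Note that this sketch is a “pull-free” push design: each stage pushes downstream and is throttled only by the bounded buffers; a pull source would instead wait for explicit requests from downstream.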
Rust is a language aimed among other things at improving the story around concurrency.
And indeed, the borrow checker will prevent the most egregious data races from occurring in the first place. That is the often hailed “Fearless” approach to concurrency.
However, as I have written before, that gets you only half of the way.
You, the programmer, still need to ensure that the concurrent business logic of your code is robust to the non-determinism that parallel execution entails.
Data races are one half of the story, while the other consists essentially of structuring concurrent logic around the ability to…
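To make the second half of that story concrete, here is a small example (invented for illustration) of a race the borrow checker cannot catch: every access goes through a Mutex, so there is no data race, yet the read-modify-write logic is still racy when the lock is released in between:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Data-race free, yet logically racy: the lock is dropped between the
// read and the write, so two threads can observe the same value and
// one increment gets lost.
#[allow(dead_code)]
fn racy_increment(counter: &Mutex<u64>) {
    let current = *counter.lock().unwrap();
    // Another thread may run between these two lock acquisitions.
    *counter.lock().unwrap() = current + 1;
}

// Holding the lock across the whole read-modify-write fixes the logic.
fn correct_increment(counter: &Mutex<u64>) {
    *counter.lock().unwrap() += 1;
}

fn count_with_threads() -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    correct_increment(&c);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // Deterministic with `correct_increment`; swap in `racy_increment`
    // and the count may come up short, despite zero data races.
    assert_eq!(count_with_threads(), 8_000);
    println!("count: 8000");
}
```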
Servo is a big system written in Rust that does lots of different things. One of those things is running JS and/or Wasm code in a VM, with that VM doubling up as a compiler when appropriate. That VM is called SpiderMonkey, and this article is about how it integrates with Servo, which also makes it an article about how to integrate SpiderMonkey with any Rust program.
The way I like to think about it, is as “the…
Service workers are a pretty cool feature of the web, enabling running code in the context of a worker, separately from a web page, in response to various events (mostly network-related events, although anything could potentially integrate with one).
The emphasis is on separately from a web page: unlike a Dedicated worker, a Service worker is also supposed to be able to run even when no page is running at all (for things like handling push notifications).
So, when implementing Service workers in Rust, it can be useful to do so in a way that highlights this “separateness” of the workers from running…